What are the practical outcomes of those hypotheses being true or false?

 

My editorial on You Tube

 

This is one of those moments when I need to reassess what the hell I am doing. Scientifically, I mean. Of course, it is good to reassess things existentially, too, every now and then, but for the moment I am limiting myself to science. Simpler and safer than life in general. Anyway, I have a financial scheme in mind, where local crowdfunding platforms serve to support the development of local suppliers in renewable energies. The scheme is based on the observable difference between prices of electricity for small users (higher), and those reserved for industrial-scale users (lower). I wonder if small consumers would be ready to pay the normal, relatively higher price in exchange for a package made of: a) electricity and b) shares in the equity of the supplier.

I have a general, methodological hypothesis in mind, which I have been trying to develop over the last 2 years or so: collective intelligence. I hypothesise that collective behaviour observable in markets can be studied as a manifestation of collective intelligence. The purpose is to go beyond optimization and to define, with scientific rigour, what are the alternative, essentially equiprobable paths of change that a complex market can take. I think such an approach is useful when I am dealing with an economic model with a lot of internal correlation between variables, and that correlation can be so strong that it turns into those variables basically looping on each other. In such a situation, distinguishing independent variables from the dependent ones becomes bloody hard, and methodologically doubtful.

On the grounds of literature, and my own experimentation, I have defined three essential traits of such collective intelligence: a) distinction between structure and instance, b) capacity to accumulate experience, and c) capacity to pass between different levels of freedom in social cohesion. I am using an artificial neural network, a multi-layer perceptron, in order to simulate such collectively intelligent behaviour.

The distinction between structure and instance means that we can devise something, make different instances of that something, each different by some small details, and experiment with those different instances in order to devise an even better something. When I make a mechanical clock, I am a clockmaker. When I am able to have a critical look at this clock, make many different versions of it – all based on the same structural connections between mechanical parts, but differing from each other by subtle details – and experiment with those multiple versions, I become a meta-clock-maker, i.e. someone who can advise clockmakers on how to make clocks. The capacity to distinguish between structures and their instances is one of the basic skills we need in life. Autistic people have a big problem in that department, as they are mostly on the instance side. To a severely autistic person, me in a blue jacket, and me in a brown jacket are two completely different people. Schizophrenic people are on the opposite end of the spectrum. To them, everything is one and the same structure, and they cannot cope with instances. Me in a blue jacket and me in a brown jacket are the same as my neighbour in a yellow jumper, and we all are instances of the same alien monster. I know you think I might be overstating, but my grandmother on my father’s side used to suffer from schizophrenia, and it was precisely that: to her, all strong smells were the manifestation of one and the same volatile poison sprayed in the air by THEM, and every person outside a circle of about 19 people closest to her was a member of THEM. Poor Jadwiga.

In economics, the distinction between structure and instance corresponds to the tension between markets and their underpinning institutions. Markets are fluid and changeable, they are like constant experimenting. Institutions give some gravitas and predictability to that experimenting. Institutions are structures, and markets are ritualized manners of multiplying and testing many alternative instances of those structures.

The capacity to accumulate experience means that as we experiment with different instances of different structures, we can store information we collect in the process, and use this information in some meaningful way. My great compatriot, Alfred Korzybski, in his general semantics, used to designate it as ‘the capacity to bind time’. The thing is not as obvious as one could think. The Nobel-prize-winning mathematician Reinhard Selten coined the concept of social games with imperfect recall (Harsanyi, Selten 1988[1]). He argued that as we, collective humans, accumulate and generalize experience about what the hell is going on, from time to time we shake off that big folder, and pick the pages endowed with the most meaning. All the remaining stuff, judged less useful at the moment, is somehow archived in culture, so that it basically stays there, but becomes much harder to access and utilise. The capacity to accumulate experience is largely about the way of accumulating it, and about doing that from-time-to-time archiving. We can observe this basic distinction in everyday life. There are things that we learn sort of incrementally. When I learn to play piano – which I wish I was learning right now, cool stuff – I practice, I practice, I practice and… I accumulate learning from all those practices, and one day I give a concert, in a pub. Still, other things, I learn them sort of haphazardly. Relationships are a good example. I am with someone, one day I am mad at her, the other day I see her as the love of my life, then, again, she really gets on my nerves, and then I think I couldn’t live without her etc. Bit of a bumpy road, isn’t it? Yes, there is some incremental learning, but you become aware of it after like 25 years of conjoint life. Earlier on, you just need to suck it up and keep going.

There is an interesting theory in economics, labelled as « semi-martingale » (see for example: Malkiel, Fama 1970[2]). When we observe changes in stock prices, in a capital market, we tend to say they are random, but they are not. You can test it. If the price is really random, it should fan out according to the pattern of the normal distribution. This is what we call a full martingale. Any real price you observe actually swings less broadly than the normal distribution: this is a semi-martingale. Still, anyone with any experience in investment knows that prediction inside the semi-martingale is always burdened with a s**tload of error. When you observe stock prices over a long time, like 2 or 3 years, you can see a sequence of distinct semi-martingales. From September through December it swings inside one semi-martingale, then the Ghost of Past Christmases shakes it badly, people panic, and later it settles into another semi-martingale, slightly shifted from the preceding one, and here it goes, semi-martingaling for another dozen weeks etc.
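One way to put a number on that claim is a rough variance-ratio check, which is not discussed in the text above; the data below are simulated stand-ins for real stock quotes, and the 0,5 mean-reversion coefficient is purely illustrative. Under a full martingale with independent increments the ratio hovers around 1; a semi-martingale-like, partly mean-reverting series comes out below 1.

```python
import numpy as np

def variance_ratio(returns: np.ndarray, k: int) -> float:
    """Rough variance-ratio statistic over non-overlapping k-period windows
    (a sketch, without the usual small-sample bias corrections)."""
    n = len(returns) - len(returns) % k
    r = returns[:n]
    r_k = r.reshape(-1, k).sum(axis=1)   # non-overlapping k-period returns
    return r_k.var() / (k * r.var())

rng = np.random.default_rng(7)
# Full martingale: independent daily increments fan out like the normal law.
iid = rng.normal(0.0, 0.01, 1000)
# Semi-martingale-like series: each shock is partly undone in the next step,
# so cumulative swings stay narrower than the normal prediction.
shock = rng.normal(0.0, 0.01, 1001)
reverting = shock[1:] - 0.5 * shock[:-1]

print(variance_ratio(iid, 10), variance_ratio(reverting, 10))
```

With real quotes, feeding a series of daily log-returns into the same function, window by window, would show where the price sits inside one semi-martingale and where a shock shifts it into the next one.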

The central theoretical question in this economic theory, and a couple of others, reads: do we learn something durable through local shocks? Does a sequence of economic shocks, of whatever type, make a learning path similar to the incremental learning of piano playing? There are strong arguments in favour of both possible answers. If you get your face punched, over and over again, you must be a really dumb asshole not to learn anything from that. Still, there is that phenomenon called systemic homeostasis: many systems, social structures included, tend to fight for stability when shaken, and they are frequently successful. The memory of shocks and revolutions is frequently erased, and they are assumed to have never existed.

The issue of different levels in social cohesion refers to the so-called swarm theory (Stradner et al 2013[3]). This theory studies collective intelligence by reference to animals, which we know are intelligent just collectively. Bees, ants, hornets: all those beasts, when acting individually, as dumb as f**k. Still, when they gang up, they develop amazingly complex patterns of action. That’s not all. Those complex patterns of theirs fall into three categories, applicable to human behaviour as well: static coupling, dynamic correlated coupling, and dynamic random coupling.

When we coordinate by static coupling, we always do things together in the same way. These are recurrent rituals, without much room for change. Many legal rules, and institutions they form the basis of, are examples of static coupling. You want to put some equity-based securities in circulation? Good, you do this, and this, and this. You haven’t done the third this? Sorry, man, but you cannot call it a day yet. When we need to change the structure of what we do, we should somehow loosen that static coupling and try something new. We should dissolve the existing business, which is static coupling, and look for creating something new. When we do so, we can sort of stay in touch with our customary business partners, and after some circling and asking around we form a new business structure, involving people we clearly coordinate with. This is dynamic correlated coupling. Finally, we can decide to sail completely uncharted waters, and take our business concept to China, or to New Zealand, and try to work with completely different people. What we do, in such a case, is emitting some sort of business signal into the environment, and waiting for any response from whoever is interested. This is dynamic random coupling. Attracting random followers to a new You Tube channel is very much an example of the same.

At the level of social cohesion, we can be intelligent in two distinct ways. On the one hand, we can keep the given pattern of collective association at the same level, i.e. one of the three I have just mentioned. We keep it ritualized and static, or somehow loose and dynamically correlated, or, finally, we take care of not ritualizing too much and keep it deliberately at the level of random associations. On the other hand, we can shift between different levels of cohesion. We take some institutions, we start experimenting with making them more flexible, at some point we possibly make it as free as possible, and we gain experience, which, in turn, allows us to create new institutions.

When applying the issue of social cohesion in collective intelligence to economic phenomena, we can use a little trick, to be found, for example, in de Vincenzo et al (2018[4]): we assume that quantitative economic variables, which we normally perceive as just numbers, are manifestations of distinct collective decisions. When I have the price of energy, let’s say, €0,17 per kilowatt hour, I consider it as the outcome of collective decision-making. At this point, it is useful to remember the fundamentals of intelligence. We perceive our own, individual decisions as outcomes of our independent thinking. We associate them with the fact of wanting something, and being apprehensive regarding something else etc. Still, neurologically, those decisions are outcomes of some neurons firing in a certain sequence. Same for economic variables, i.e. mostly prices and quantities: they are the fruit of interactions between the members of a community. When I buy apples in the local marketplace, I just buy them for a certain price, and, if they look bad, I just don’t buy. This is not any form of purposeful influence upon the market. Still, when 10 000 people like me do the same, sort of ‘buy when the price is good, don’t buy when the apple is bruised’, a patterned process emerges. The resulting price of apples is the outcome of that process.

Social cohesion can be viewed as association between collective decisions, not just between individual actions. The resulting methodology is made, roughly speaking, of three steps. Step one: I put all the economic variables in my model over a common denominator (common scale of measurement). Step two: I calculate the relative cohesion between them with the general concept of a fitness function, which I can express, for example, as the Euclidean distance between local values of variables in question. Step three: I calculate the average of those Euclidean distances, and I calculate its reciprocal, like « 1/x ». This reciprocal is the direct measure of cohesion between decisions, i.e. the higher the value of this precise « 1/x », the more cohesion between different processes of economic decision-making.
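The three steps above can be sketched in a few lines; the min-max scaling used for the common denominator in step one, and the array layout, are my assumptions, not the only possible choices:

```python
import numpy as np

def cohesion(variables: np.ndarray) -> float:
    """Cohesion between K variables observed over T periods (shape (T, K)).

    Step 1: bring all variables to a common 0..1 scale.
    Step 2: Euclidean distance between every pair of variables.
    Step 3: reciprocal of the average distance -- the higher the value,
    the more cohesion between the underlying decision-making processes.
    """
    span = np.ptp(variables, axis=0)
    scaled = (variables - variables.min(axis=0)) / (span + 1e-12)
    k = scaled.shape[1]
    distances = [np.linalg.norm(scaled[:, i] - scaled[:, j])
                 for i in range(k) for j in range(i + 1, k)]
    return 1.0 / (np.mean(distances) + 1e-12)
```

Three columns that track each other closely yield a high cohesion value; three unrelated random columns yield a low one, which is exactly the contrast the swarm-theory argument needs.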

Now, those of you with a sharp scientific edge could say now: “Wait a minute, doc. How do you know we are talking about different processes of decision making? How do you know that variable X1 comes from a different process than variable X2?”. This is precisely my point. The swarm theory tells me that if I can observe changing cohesion between those variables, I can reasonably hypothesise that their underlying decision-making processes are distinct. If, on the other hand, their mutual Euclidean distance stays the same, I hypothesise that they come from the same process.

Summing up, here is the general drift: I take an economic model and I formulate three hypotheses as for the occurrence of collective intelligence in that model. Hypothesis #1: different variables of the model come from different processes of collective decision-making.

Hypothesis #2: the economic system underlying the model has the capacity to learn as a collective intelligence, i.e. to durably increase or decrease the mutual cohesion between those processes. Hypothesis #3: collective learning in the presence of economic shocks is different from the instance of learning in the absence of such shocks.

They look nice, those hypotheses. Now, why the hell should anyone bother? I mean what are the practical outcomes of those hypotheses being true or false? In my experimental perceptron, I express the presence of economic shocks by using hyperbolic tangent as neural function of activation, whilst the absence of shocks (or the presence of countercyclical policies) is expressed with a sigmoid function. Those two yield very different processes of learning. Long story short, the sigmoid learns more, i.e. it accumulates more local errors (thus more experimental material for learning), and it generates a steady trend towards lower cohesion between variables (decisions). The hyperbolic tangent accumulates less experiential material (it learns less), and it is quite random in arriving at any tangible change in cohesion. The collective intelligence I mimicked with that perceptron looks like the kind of intelligence, which, when going through shocks, learns only the skill of returning to the initial position after shock: it does not create any lasting type of change. The latter happens only when my perceptron has a device to absorb and alleviate shocks, i.e. the sigmoid neural function.
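A much-simplified sketch of that sigmoid-versus-tangent comparison looks like this; the toy inputs, the target output, and the error-feedback rule are my stand-ins for the actual experimental perceptron, and the sigmoid is the textbook 1/(1 + e^(-h)) form:

```python
import numpy as np

def run_perceptron(activation, rounds=1500, seed=1):
    """Toy single-neuron run: push the inputs through the activation
    function, feed the local error back into the inputs, and accumulate
    absolute errors as the 'experimental material for learning'."""
    rng = np.random.default_rng(seed)
    x = rng.random(10)              # stand-ins for the ten input variables
    w = rng.random(10)
    target = 0.5                    # arbitrary expected output
    cumulative_error = 0.0
    for _ in range(rounds):
        output = activation(np.dot(x, w))
        error = target - output
        cumulative_error += abs(error)
        x = x + error / len(x)      # local error fed back into the inputs
    return cumulative_error

sigmoid = lambda h: 1.0 / (1.0 + np.exp(-h))
print(run_perceptron(sigmoid), run_perceptron(np.tanh))
```

Tracking the two cumulative errors round by round, rather than just their totals, is what produces the contrast in learning curves discussed above.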

When I have my perceptron explicitly feeding back that cohesion between variables (i.e. feeding back the fitness function considered as a local error), it learns less and changes less, but it does not necessarily go through fewer shocks. When the perceptron does not care about feeding back the observable distance between variables, there is more learning and more change, but not more shocks. The overall fitness function of my perceptron changes over time. The ‘over time’ depends on the kind of neural activation function I use. In the case of hyperbolic tangent, it is brutal change over a short time, eventually coming back to virtually the same point that it started from. In the hyperbolic tangent, the passage between various levels of association, according to the swarm theory, is super quick, but not really productive. In the sigmoid, it is definitely a steady trend of decreasing cohesion.

I want to know what the hell I am doing. I feel I have made a few steps towards that understanding, but getting to know what I am doing proves really hard.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your suggestions on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Harsanyi, J. C., & Selten, R. (1988). A general theory of equilibrium selection in games. MIT Press Books, 1.

[2] Malkiel, B. G., & Fama, E. F. (1970). Efficient capital markets: A review of theory and empirical work. The journal of Finance, 25(2), 383-417.

[3] Stradner, J., Thenius, R., Zahadat, P., Hamann, H., Crailsheim, K., & Schmickl, T. (2013). Algorithmic requirements for swarm intelligence in differently coupled collective systems. Chaos, Solitons & Fractals, 50, 100-114.

[4] De Vincenzo, I., Massari, G. F., Giannoccaro, I., Carbone, G., & Grigolini, P. (2018). Mimicking the collective intelligence of human groups as an optimization tool for complex problems. Chaos, Solitons & Fractals, 110, 259-266.

How can I possibly learn on that thing I have just become aware I do?

 

My editorial on You Tube

 

I keep working on the application of neural networks to simulate the workings of collective intelligence in humans. I am currently macheting my way through the model proposed by de Vincenzo et al in their article entitled ‘Mimicking the collective intelligence of human groups as an optimization tool for complex problems’ (2018[1]). In the spirit of my own research, I am trying to use optimization tools for a slightly different purpose, that is for simulating the way things are done. It usually means that I relax some assumptions which come along with said optimization tools, and I just watch what happens.

Vincenzo et al propose a model of artificial intelligence, which combines a classical perceptron, such as the one I have already discussed on this blog (see « More vigilant than sigmoid », for example) with a component of deep learning based on the observable divergences in decisions. In that model, social agents strive to minimize their divergences and to achieve relative consensus. Mathematically, it means that each decision is characterized by a fitness function, i.e. a function of mathematical distance from other decisions made in the same population.

I take the tensors I have already been working with, namely the input tensor TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR} and the output tensor TO = {QR/N; QNR/N}. Once again, consult « More vigilant than sigmoid » as for the meaning of those variables. In the spirit of the model presented by Vincenzo et al, I assume that each variable in my tensors is a decision. Thus, for example, PA;R, i.e. the basic price of energy from renewable sources, which small consumers are charged with, is the tangible outcome of a collective decision. Same for the levelized cost of electricity from renewable sources, the LCOER, etc. For each i-th variable xi in TI and TO, I calculate its relative fitness to the overall universe of decisions, as the average of itself, and of its Euclidean distances to other decisions. It looks like:

 

V(xi) = (1/N)*{xi + [(xi – xi;1)²]^0,5 + [(xi – xi;2)²]^0,5 + … + [(xi – xi;K)²]^0,5}

 

…where N is the total number of variables in my tensors, and K = N – 1.

 

In a next step, I can calculate the average of averages, thus to sum up all the individual V(xi)’s and divide that total by N. That average V*(x) = (1/N) * [V(x1) + V(x2) + … + V(xN)] is the measure of aggregate divergence between individual variables considered as decisions.
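Both formulas translate directly into code; note that for scalar values, [(xi – xj)²]^0,5 is simply the absolute difference. Plugging in the rounded initial values quoted a few paragraphs further down reproduces the reported fitness figures to within rounding error:

```python
import numpy as np

def local_fitness(values: np.ndarray) -> np.ndarray:
    """V(x_i) = (1/N) * { x_i + sum over the K = N - 1 other decisions
    of sqrt((x_i - x_j)^2) }, computed for every variable at once."""
    n = len(values)
    # For scalars, sqrt((x_i - x_j)^2) is |x_i - x_j|; the j = i term adds 0.
    distances = np.abs(values[:, None] - values[None, :]).sum(axis=1)
    return (values + distances) / n

def aggregate_fitness(values: np.ndarray) -> float:
    """V*(x) = (1/N) * [V(x_1) + V(x_2) + ... + V(x_N)]."""
    return float(local_fitness(values).mean())
```

Passing all twelve variables of TI and TO as one vector keeps the formula faithful to the text, where N counts the variables of both tensors together.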

Now, I imagine two populations: one who actively learns from the observed divergence of decisions, and another one who doesn’t really. The former is represented with a perceptron that feeds back the observable V(xi)’s into consecutive experimental rounds. Still, it is just feeding that V(xi) back into the loop, without any a priori ideas about it. The latter is more or less what it already is: it just yields those V(xi)’s but does not do much about them.

I needed a bit of thinking as for how exactly that feeding back of the fitness function should look. In the algorithm I finally came up with, it looks different for the input variables on the one hand, and for the output ones on the other. You might remember, from the reading of « More vigilant than sigmoid », that my perceptron, in its basic version, learns by estimating local errors observed in the last round of experimentation, and then adding those local errors to the values of input variables, just to make them roll once again through the neural activation function (sigmoid or hyperbolic tangent), and see what happens.

As I upgrade my perceptron with the estimation of fitness function V(xi), I ask: who estimates the fitness function? What kind of question is that? Well, a basic one. I have that neural network, right? It is supposed to be intelligent, right? I add a function of intelligence, namely that of estimating the fitness function. Who is doing the estimation: my supposedly intelligent network or some other intelligent entity? If it is an external intelligence, mine, for a start, it just estimates V(xi), sits on its couch, and watches the perceptron struggling through the meanders of attempts to be intelligent. In such a case, the fitness function is like sweat generated by a body. The body sweats but does not have any way of using the sweat produced.

Now, if the V(xi) is to be used for learning, the perceptron is precisely the incumbent intelligent structure supposed to use it. I see two basic ways for the perceptron to do that. First of all, the input neuron of my perceptron can capture the local fitness functions on input variables and add them, as additional information, to the previously used values of input variables. Second of all, the second hidden neuron can add the local fitness functions, observed on output variables, to the exponent of the neural activation function.

I explain. I am a perceptron. I start my adventure with two tensors: input TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR} and output TO = {QR/N; QNR/N}. The initial values I start with are slightly modified in comparison to what was being processed in « More vigilant than sigmoid ». I assume that the initial market of renewable energies – thus most variables of quantity with ‘R’ in subscript – is quasi inexistent. More specifically, QR/N = 0,01 and  QNR/N = 0,99 in output variables, whilst in the input tensor I have capital invested in capacity IR = 0,46 (thus a readiness to go and generate from renewables), and yet the crowdfunding flow K is KR = 0,01 for renewables and KNR = 0,09 for non-renewables. If you want, it is a sector of renewable energies which is sort of ready to fire off but hasn’t done anything yet in that department. All in all, I start with: LCOER = 0,26; LCOENR = 0,48; KR = 0,01; KNR = 0,09; IR = 0,46; INR = 0,99; PA;R = 0,71; PA;NR = 0,46; PB;R = 0,20; PB;NR = 0,37; QR/N = 0,01; and QNR/N = 0,99.

Being a pure perceptron, I am dumb as f**k. I can learn by pure experimentation. I have ambitions, though, to be smarter, thus to add some deep learning to my repertoire. I estimate the relative mutual fitness of my variables according to the V(xi) formula given earlier, as arithmetical average of each variable separately and its Euclidean distance to others. With the initial values as given, I observe: V(LCOER; t0) = 0,302691788; V(LCOENR; t0) = 0,310267104; V(KR; t0) = 0,410347388; V(KNR; t0) = 0,363680721; V(IR ; t0) = 0,300647174; V(INR ; t0) = 0,652537097; V(PA;R ; t0) = 0,441356844 ; V(PA;NR ; t0) = 0,300683099 ; V(PB;R ; t0) = 0,316248176 ; V(PB;NR ; t0) = 0,293252713 ; V(QR/N ; t0) = 0,410347388 ; and V(QNR/N ; t0) = 0,570485945. All that stuff put together into an overall fitness estimation is like average V*(x; t0) = 0,389378787.

I ask myself: what happens to that fitness function as I process information with my two alternative neural functions, the sigmoid or the hyperbolic tangent? I jump to experimental round 1500, thus to t1500, and I watch. With the sigmoid, I have V(LCOER; t1500) = 0,359529289; V(LCOENR; t1500) = 0,367104605; V(KR; t1500) = 0,467184889; V(KNR; t1500) = 0,420518222; V(IR ; t1500) = 0,357484675; V(INR ; t1500) = 0,709374598; V(PA;R ; t1500) = 0,498194345; V(PA;NR ; t1500) = 0,3575206; V(PB;R ; t1500) = 0,373085677; V(PB;NR ; t1500) = 0,350090214; V(QR/N ; t1500) = 0,467184889; and V(QNR/N ; t1500) = 0,570485945, with average V*(x; t1500) = 0,441479829.

Hmm, interesting. Working my way through intelligent cognition with a sigmoid, after 1500 rounds of experimentation, I have somehow decreased the mutual fitness of decisions I make through individual variables. Those V(xi)’s have changed. Now, let’s see what it gives when I do the same with the hyperbolic tangent: V(LCOER; t1500) =   0,347752478; V(LCOENR; t1500) =  0,317803169; V(KR; t1500) =   0,496752021; V(KNR; t1500) = 0,436752021; V(IR ; t1500) =  0,312040791; V(INR ; t1500) =  0,575690006; V(PA;R ; t1500) =  0,411438698; V(PA;NR ; t1500) =  0,312052766; V(PB;R ; t1500) = 0,370346458; V(PB;NR ; t1500) = 0,319435252; V(QR/N ; t1500) =  0,496752021; and V(QNR/N ; t1500) = 0,570485945, with average V*(x; t1500) =0,413941802.

Well, it is becoming more and more interesting. Being a dumb perceptron, I can, nevertheless, create two different states of mutual fitness between my decisions, depending on the kind of neural function I use. I want to have a bird’s eye view on the whole thing. How can a perceptron have a bird’s eye view of anything? Simple: it rents a drone. How can a perceptron rent a drone? Well, how smart do you have to be to rent a drone? Anyway, it gives something like the graph below:

 

Wow! So this is what I do, as a perceptron, and what I haven’t been aware of so far? Amazing. When I think in sigmoid, I sort of consistently increase the relative distance between my decisions, i.e. I decrease their mutual fitness. The sigmoid, that function which sort of calms down any local disturbance, makes the decision-making process less coherent, more prone to embracing a little chaos. The hyperbolic tangent thinking is different. It occasionally sort of stretches across a broader spectrum of fitness in decisions, but as soon as it does so, it seems afraid of its own actions, and returns to the initial level of V*(x). Please, note that as a perceptron, I am almost alive, and I produce slightly different outcomes in each instance of myself. The point is that in the line corresponding to hyperbolic tangent, the comb-like pattern of small oscillations can stretch and move from instance to instance. Still, it keeps the general form of a comb.

OK, so this is what I do, and now I ask myself: how can I possibly learn on that thing I have just become aware I do? As a perceptron, endowed with this precise logical structure, I can do one thing with information: I can arithmetically add it to my input. Still, having some ambitions for evolving, I attempt to change my logical structure, and I risk incorporating, somehow, the observable V(xi) into my neural activation function. Thus, the first thing I do with that new learning is to top the values of input variables with local fitness functions observed in the previous round of experimenting. I am doing it already with local errors observed in outcome variables, so why not double the dose of learning? Anyway, it goes like: xi(t0) = xi(t-1) + e(xi; t-1) + V(xi; t-1). It looks interesting, but I am still using just a fraction of information about myself, i.e. just that about input variables. Here is where I start being really ambitious. In the equation of the sigmoid function, I change s = 1 / [1 + exp(∑xi*Wi)] into s = 1 / [1 + exp(∑xi*Wi + V(To))], where V(To) stands for local fitness functions observed in output variables. I do the same by analogy in my version based on hyperbolic tangent. The th = [exp(2*∑xi*wi) – 1] / [exp(2*∑xi*wi) + 1] turns into th = {exp[2*∑xi*wi + V(To)] – 1} / {exp[2*∑xi*wi + V(To)] + 1}. I do what I know how to do, i.e. adding information from fresh observation, and I apply it to change the structure of my neural function.
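The two upgraded activation functions, and the input update, can be written out directly; treating V(To) as the summed fitness of the output variables is my assumption, since the text leaves the exact aggregation open:

```python
import numpy as np

def sigmoid_with_fitness(x, w, v_out):
    """s = 1 / [1 + exp(sum(x_i * w_i) + V(T_O))], the sigmoid as given
    in the text, with the output fitness added to the exponent."""
    return 1.0 / (1.0 + np.exp(np.dot(x, w) + v_out))

def tanh_with_fitness(x, w, v_out):
    """th = {exp[2*sum(x_i*w_i) + V(T_O)] - 1} / {exp[2*sum(x_i*w_i) + V(T_O)] + 1}."""
    e = np.exp(2.0 * np.dot(x, w) + v_out)
    return (e - 1.0) / (e + 1.0)

def updated_inputs(x_prev, errors, v_local):
    """x_i(t0) = x_i(t-1) + e(x_i; t-1) + V(x_i; t-1)."""
    return x_prev + errors + v_local
```

With v_out = 0, the second function collapses back to the ordinary hyperbolic tangent, which is a quick sanity check on the formula.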

All those ambitious changes in myself, put together, change my pattern of learning as shown in the graph below:

When I think sigmoid, the fact of feeding back my own fitness function does not change much. It makes the learning curve a bit steeper in the early experimental rounds, and makes it asymptotic to a little lower threshold in the last rounds, as compared to learning without feedback on V(xi). Yet, it is the same old sigmoid, with just its sleeves ironed. On the other hand, the hyperbolic tangent thinking changes significantly. What used to look like a comb, without feedback, now looks much more aggressive, like a plough on steroids. There is something like a complex cycle of learning on the internal cohesion of decisions made. Generally, feeding back the observable V(xi) increases the finally achieved cohesion in decisions, and, at the same time, it reduces the cumulative error gathered by the perceptron. With that type of feedback, the cumulative error of the sigmoid, which normally hits around 2,2 in this case, falls to like 0,8. With hyperbolic tangent, cumulative errors which used to be 0,6 ÷ 0,8 without feedback, fall to 0,1 ÷ 0,4 with feedback on V(xi).

 

The (provisional) piece of wisdom I can have as my takeaway is twofold. Firstly, whatever I do, a large chunk of perceptual learning leads to a bit less cohesion in my decisions. As I learn by experience, I allow myself more divergence in decisions. Secondly, looping on that divergence, and including it explicitly in my pattern of learning leads to relatively more cohesion at the end of the day. Still, more cohesion has a price – less learning.

 


[1] De Vincenzo, I., Massari, G. F., Giannoccaro, I., Carbone, G., & Grigolini, P. (2018). Mimicking the collective intelligence of human groups as an optimization tool for complex problems. Chaos, Solitons & Fractals, 110, 259-266.

More vigilant than sigmoid

My editorial on You Tube

 

I keep working on the application of neural networks as simulators of collective intelligence. The particular field of research I am diving into is the sector of energy, its shift towards renewable energies, and the financial scheme I invented some time ago, which I called EneFin. As for that last one, you can consult « The essential business concept seems to hold », in order to grasp the outline.

I continue developing the line of research I described in my last update in French: « De la misère, quoi ». There are observable differences in the prices of energy according to the size of the buyer. In many countries – practically in all the countries of Europe – there are two distinct price brackets. One, which I further designate as PB, is reserved for contracts with big consumers of energy (factories, office buildings etc.) and it is clearly lower. The other, further called PA, applies to small buyers, mainly households and really small businesses.

As an economist, I have that intuitive thought in the presence of price forks: the differential in prices is some kind of value. If it is value, why not give it some financial spin? I came up with the idea of the EneFin contract. People buy energy, in the amount Q, from a local supplier who sources it from renewables (water, wind etc.), and they pay the price PA, thus generating a financial flow equal to Q*PA. That flow buys two things: energy priced at PB, and participatory titles in the capital of their supplier, for the differential Q*(PA – PB). I imagine some kind of crowdfunding platform, which could channel the amount of capital K = Q*(PA – PB).
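The arithmetic of that contract is simple enough to sketch in a few lines of code. The function below only restates the formulas above; the quantities and prices in the example are purely hypothetical.

```python
def enefin_split(Q, PA, PB):
    """Split a small user's payment Q*PA into two components:
    energy valued at the big-user price PB, and the crowdfunded
    capital K = Q*(PA - PB) that buys equity in the supplier."""
    payment = Q * PA           # what the small consumer actually pays
    energy_component = Q * PB  # the energy itself, at the big-user price
    K = Q * (PA - PB)          # the differential, channelled into equity
    return payment, energy_component, K

# Hypothetical figures: 2500 kWh a year, PA = 0.25, PB = 0.18 per kWh
payment, energy, K = enefin_split(2500, 0.25, 0.18)
print(round(payment, 2), round(energy, 2), round(K, 2))  # 625.0 450.0 175.0
```

The point of writing it down is to see that K scales with both the size of the price fork and the quantity of energy bought, which is exactly what makes it a candidate stream of capital.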

That K remains in some sort of fluid relationship to I, or capital invested in the productive capacity of energy suppliers. Fluid relationship means that each of those capital balances can date other capital balances, no hard feelings held. As we talk (OK, I talk) about prices of energy and capital invested in capacity, it is worth referring to LCOE, or Levelized Cost Of Electricity. The LCOE is essentially the marginal cost of energy, and a no-go-below limit for energy prices.

I want to simulate the possible process of introducing that general financial concept, namely K = Q*(PA – PB), into the market of energy, in order to promote the development of diversified networks, made of local suppliers in renewable energy.

Here comes my slightly obsessive methodological idea: use artificial intelligence in order to simulate the process. In classical economic method, I make a model, I take empirical data, I regress some of it on another some of it, and I come up with coefficients of regression, and they tell me how the thing should work if we were living in a perfect world. Artificial intelligence opens a different perspective. I can assume that my model is a logical structure, which keeps experimenting with itself, and we don’t know where the hell that experimentation leads. I want to use neural networks in order to represent the exact way that social structures can possibly experiment with that K = Q*(PA – PB) thing. Instead of optimizing, I want to see the way that possible optimization can occur.

I have that simple neural network, which I already referred to in « The point of doing manually what the loop is supposed to do » and which is basically quite dumb, as it does not do any abstraction. Still, it nicely experiments with logical structures. I am sketching its logical structure in the picture below. I distinguish four layers of neurons: input, hidden 1, hidden 2, and output. When I say ‘layers’, it is a bit of grand language. For the moment, I am working with one single neuron in each layer. It is more of a synaptic chain.

Anyway, the input neuron feeds data into the chain. In the first round of experimentation, it feeds the source data in. In consecutive rounds of learning through experimentation, that first neuron assesses and feeds back local errors, measured as discrepancies between the output of the output neuron, and the expected values of output variables. The input neuron is like the first step in a chain of perception, in a nervous system: it receives and notices the raw external information.

The hidden layers – or the hidden neurons in the chain – modify the input data. The first hidden neuron generates quasi-random weights, which the second hidden neuron attributes to the input variables. Just as in a nervous system, the input stimuli are assessed for their relative importance. In the original algorithm of the perceptron, which I used to design this network, those two functions, i.e. generating the random weights and attributing them to input variables, were fused in one equation. Still, my fundamental intent is to use neural networks to simulate collective intelligence, and I intuitively guess those two functions are somehow distinct. Pondering the importance of things is one action, and using that ponderation for practical purposes is another. It is like scientists debating the way to run a policy, and the government having the actual thing done. These are two separate paths of action.

Whatever. What the second hidden neuron produces is a compound piece of information: the summation of input variables multiplied by random weights. The output neuron transforms this compound data through a neural function. I prepared two versions of this network, with two distinct neural functions: the sigmoid, and the hyperbolic tangent. As I found out, the ways they work are very different, and so are the results they produce. Once the output neuron generates the transformed data – the neural output – the input neuron measures the discrepancy between the original, expected values of output variables, and the values generated by the output neuron. The exact way of computing that discrepancy is made of two operations: calculating the local derivative of the neural function, and multiplying that derivative by the residual difference ‘original expected output value minus output value generated by the output neuron’. The discrepancy calculated this way is considered a local error, and is fed back into the input neuron as an addition to the value of each input variable.

Before I go into describing the application I made of that perceptron, as regards my idea for a financial scheme, I want to delve into the mechanism of learning triggered through repeated looping of that logical structure. The input neuron measures the arithmetical difference between the expected values of output and the output of the network in the preceding round of experimentation, and that difference is multiplied by the local derivative of said output. Derivative functions, in their deepest, Newtonian sense, are magnitudes of change in something else, i.e. in their base function. In the Newtonian perspective, everything that happens can be seen either as change (derivative) in something else, or as an integral (an aggregate that changes its shape) of still something else. When I multiply the local deviation from expected values by the local derivative of the estimated value, I assume this deviation is as important as the local magnitude of change in its estimation. The faster things happen, the more important they are, so to say. My perceptron learns by assessing the magnitude of local changes it induces in its own estimations of reality.
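A minimal Python sketch of that learning loop, as I read it from the description above, could look like this. The uniform weighting scheme and the single expected output value are my simplifying assumptions, not a faithful copy of the original spreadsheet.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def run_chain(inputs, expected, rounds=5000, seed=0):
    """One instance of the synaptic chain: the first hidden neuron draws
    quasi-random weights, the second aggregates the weighted inputs, the
    output neuron applies the sigmoid, and the local error (derivative
    times residual) is fed back as an addition to every input."""
    rng = np.random.default_rng(seed)
    x = np.array(inputs, dtype=float)
    cumulative_error = 0.0
    for _ in range(rounds):
        w = rng.uniform(0.0, 1.0, size=x.shape)   # hidden neuron 1: weights
        h = np.sum(w * x)                         # hidden neuron 2: aggregation
        out = sigmoid(h)                          # output neuron
        local_error = d_sigmoid(h) * (expected - out)
        cumulative_error += local_error
        x = x + local_error                       # feedback into all inputs
    return x, cumulative_error
```

Swapping np.tanh, with its derivative 1 - tanh(h)**2, in place of the sigmoid gives the hyper-tangential variant discussed further on.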

I took that general logical structure of the perceptron, and I applied it to my core problem, i.e. the possible adoption of the new financial scheme in the market of energy. Here comes sort of an originality in my approach. The basic way of using neural networks is to give them a substantial set of real data as learning material, make them learn on that data, and then make them optimize a hypothetical set of data. Here you have those 20 old cars, take them to pieces and try to put them back together, observe all the anomalies you have thus created, and then make me a new car on the grounds of that learning. I adopted a different approach. My focus is to study the process of learning in itself. I took just one set of actual input values, exogenous to my perceptron, something like an initial situation. I ran 5000 rounds of learning in the perceptron, on the basis of that initial set of values, and I observed how learning takes place.

My initial set of data is made of two tensors: input TI and output TO.

The thing I am the most focused on is the relative abundance of energy supplied from renewable sources. I express the ‘abundance’ part mathematically as the coefficient of energy consumed per capita, or Q/N. The relative bend towards renewables, or towards the non-renewables is apprehended as the distinction between renewable energy QR/N consumed per capita, and the non-renewable one, the QNR/N, possibly consumed by some other capita. Hence, my output tensor is TO = {QR/N; QNR/N}.

I hypothesise that TO is being generated by input made of prices, costs, and capital outlays. I split my price fork PA – PB (price for the small buyers minus price for the big ones) into renewables and non-renewables, namely into: PA;R, PA;NR, PB;R, and PB;NR. I mirror the distinction in prices with that in the cost of energy, and so I define LCOER and LCOENR. I want to create a financial scheme that generates a crowdfunded stream of capital K, to finance new productive capacities, and I want it to finance renewable energies, so I call it KR. Still, some other people, like my compatriots in Poland, might be so attached to fossils they might be willing to crowdfund new installations based on non-renewables. Thus, I need to take into account a KNR in the game. When I say capital, and I say LCOE, I sort of feel compelled to say aggregate investment in productive capacity, in renewables, and in non-renewables, and I call it, respectively, IR and INR. All in all, my input tensor spells TI = {LCOER, LCOENR, KR, KNR, IR, INR, PA;R, PA;NR, PB;R, PB;NR}.

The next step is scale and measurement. The neural functions I use in my perceptron like having their input standardized. Their tastes in standardization differ a little. The sigmoid likes it nicely spread between 0 and 1, whilst the hyperbolic tangent, the more reckless of the two, tolerates -1 ≤ x ≤ 1. I chose to standardize the input data between 0 and 1, so as to make it fit into both. My initial thought was to aim for an energy market with great abundance of renewable energy, and a relatively declining supply of non-renewables. I generally trust my intuition, only I like to leverage it with a bit of chaos, every now and then, and so I ran some pseudo-random strings of values and I chose an output tensor made of TO = {QR/N = 0,95; QNR/N = 0,48}.
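The kind of standardization meant here can be written as plain min-max scaling, which is my reading of the procedure:

```python
def standardize(values):
    """Min-max scaling into the interval [0, 1], which suits the sigmoid
    and stays within what the hyperbolic tangent tolerates."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(standardize([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```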

That state of output is supposed to be somehow logically connected to the state of input. I imagined a market, where the relative abundance in the consumption of, respectively, renewable energies and non-renewable ones is mostly driven by growing demand for the former, and a declining demand for the latter. Thus, I imagined relatively high a small-user price for renewable energy and a large fork between that PA;R and the PB;R. As for non-renewables, the fork in prices is more restrained (than in the market of renewables), and its top value is relatively lower. The non-renewable power installations are almost fed up with investment INR, whilst the renewables could still do with more capital IR in productive assets. The LCOENR of non-renewables is relatively high, although not very: yes, you need to pay for the fuel itself, but you have economies of scale. As for the LCOER for renewables, it is pretty low, which actually reflects the present situation in the market.

The last part of my input tensor regards the crowdfunded capital K. I assumed two different initial situations. Firstly, there is virtually no crowdfunding, thus a very low K. Secondly, some crowdfunding is already alive and kicking, and it sits slightly above half of what people expect in the industry.

Once again, I applied those qualitative assumptions to a set of pseudo-random values between 0 and 1. Here comes the result, in the table below.

 

Table 1 – The initial values for learning in the perceptron

| Tensor    | Variable | The Market with virtually no crowdfunding | The Market with significant crowdfunding |
|-----------|----------|-------------------------------------------|------------------------------------------|
| Input TI  | LCOER    | 0,26 | 0,26 |
|           | LCOENR   | 0,48 | 0,48 |
|           | KR       | 0,01 | 0,56 |
|           | KNR      | 0,01 | 0,52 |
|           | IR       | 0,46 | 0,46 |
|           | INR      | 0,99 | 0,99 |
|           | PA;R     | 0,71 | 0,71 |
|           | PA;NR    | 0,46 | 0,46 |
|           | PB;R     | 0,20 | 0,20 |
|           | PB;NR    | 0,37 | 0,37 |
| Output TO | QR/N     | 0,95 | 0,95 |
|           | QNR/N    | 0,48 | 0,48 |

 

The way the perceptron works means that it generates and feeds back local errors in each round of experimentation. Logically, over the 5000 rounds of experimentation, each input variable gathers those local errors, like a snowball rolling downhill. I take the values of input variables from the last, i.e. the 5000th round: they have the initial values, from the table above, and, on top of them, there is the cumulative error from the 5000 experiments. How to standardize them, so as to make them comparable with the initial ones? I observe: all those final values have the same cumulative error in them, across the whole TI input tensor. I choose a simple method of standardization. As the initial values were standardized over the interval between 0 and 1, I standardize the resulting values over the interval 0 ≤ x ≤ (1 + cumulative error).
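Under my reading of that procedure, the re-standardization boils down to dividing each learnt value by (1 + cumulative error):

```python
def restandardize(learnt_values, cumulative_error):
    """Rescale values from the interval [0, 1 + cumulative_error]
    back to [0, 1], so they compare with the initial, standardized input."""
    return [v / (1.0 + cumulative_error) for v in learnt_values]

# With a cumulative error of 2.0, a learnt value of 1.5 maps to 0.5
print(restandardize([0.0, 1.5, 3.0], 2.0))  # [0.0, 0.5, 1.0]
```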

I observe the unfolding of cumulative error along the path of learning, made of 5000 steps. There is a peculiarity in each of the neural functions used: the sigmoid, and the hyperbolic tangent. The sigmoid learns in a slightly Hitchcockian way. Initially, local errors just rocket up. It is as if that sigmoid was initially yelling: ‘F******k! What a ride!’. Then, the value of errors drops very sharply, down to something akin to a vanishing tremor, and starts hovering lazily over some implicit asymptote. Hyperbolic tangent learns differently. It seems to do all it can to minimize local errors whenever it is possible. Obviously, it is not always possible. Every now and then, that hyperbolic tangent produces an explosively high value of local error, like a sudden earthquake, just to go back into forced calm right after. You can observe those two radically different ways of learning in the two graphs below.

Two ways of learning – the sigmoidal one and the hyper-tangential one – bring interestingly different results, just as differentiated are the results of learning depending on the initial assumptions as for crowdfunded capital K. Tables 2 – 5, further below, list the results I got. A bit of additional explanation will not hurt. For every version of learning, i.e. sigmoid vs hyperbolic tangent, and K = 0,01 vs K ≈ 0,5, I ran 5 instances of 5000 rounds of learning in my perceptron. This is the meaning of the word ‘Instance’ in those tables. One instance is like a tensor of learning: one happening of 5000 consecutive experiments. The values of output variables remain constant all the time: TO = {QR/N = 0,95; QNR/N = 0,48}. The perceptron sweats in order to come up with some interesting combination of input variables, given this precise tensor of output.

 

Table 2 – Outcomes of learning with the sigmoid, no initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning (sigmoid, no initial crowdfunding):

| Variable | Instance 1 | Instance 2 | Instance 3 | Instance 4 | Instance 5 |
|---|---|---|---|---|---|
| cumulative error | 2,11 | 2,11 | 2,09 | 2,12 | 2,16 |
| LCOER | 0,7617 | 0,7614 | 0,7678 | 0,7599 | 0,7515 |
| LCOENR | 0,8340 | 0,8337 | 0,8406 | 0,8321 | 0,8228 |
| KR | 0,6820 | 0,6817 | 0,6875 | 0,6804 | 0,6729 |
| KNR | 0,6820 | 0,6817 | 0,6875 | 0,6804 | 0,6729 |
| IR | 0,8266 | 0,8262 | 0,8332 | 0,8246 | 0,8155 |
| INR | 0,9966 | 0,9962 | 1,0045 | 0,9943 | 0,9832 |
| PA;R | 0,9062 | 0,9058 | 0,9134 | 0,9041 | 0,8940 |
| PA;NR | 0,8266 | 0,8263 | 0,8332 | 0,8247 | 0,8155 |
| PB;R | 0,7443 | 0,7440 | 0,7502 | 0,7425 | 0,7343 |
| PB;NR | 0,7981 | 0,7977 | 0,8044 | 0,7962 | 0,7873 |

 

 

Table 3 – Outcomes of learning with the sigmoid, with substantial initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning (sigmoid, substantial initial crowdfunding):

| Variable | Instance 1 | Instance 2 | Instance 3 | Instance 4 | Instance 5 |
|---|---|---|---|---|---|
| cumulative error | 1,98 | 2,01 | 2,07 | 2,03 | 1,96 |
| LCOER | 0,7511 | 0,7536 | 0,7579 | 0,7554 | 0,7494 |
| LCOENR | 0,8267 | 0,8284 | 0,8314 | 0,8296 | 0,8255 |
| KR | 0,8514 | 0,8529 | 0,8555 | 0,8540 | 0,8504 |
| KNR | 0,8380 | 0,8396 | 0,8424 | 0,8407 | 0,8369 |
| IR | 0,8189 | 0,8207 | 0,8238 | 0,8220 | 0,8177 |
| INR | 0,9965 | 0,9965 | 0,9966 | 0,9965 | 0,9965 |
| PA;R | 0,9020 | 0,9030 | 0,9047 | 0,9037 | 0,9014 |
| PA;NR | 0,8189 | 0,8208 | 0,8239 | 0,8220 | 0,8177 |
| PB;R | 0,7329 | 0,7356 | 0,7402 | 0,7375 | 0,7311 |
| PB;NR | 0,7891 | 0,7913 | 0,7949 | 0,7927 | 0,7877 |

 


Table 4 – Outcomes of learning with the hyperbolic tangent, no initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning (hyperbolic tangent, no initial crowdfunding):

| Variable | Instance 1 | Instance 2 | Instance 3 | Instance 4 | Instance 5 |
|---|---|---|---|---|---|
| cumulative error | 1,1 | 1,27 | 0,69 | 0,77 | 0,88 |
| LCOER | 0,6470 | 0,6735 | 0,5599 | 0,5805 | 0,6062 |
| LCOENR | 0,7541 | 0,7726 | 0,6934 | 0,7078 | 0,7257 |
| KR | 0,5290 | 0,5644 | 0,4127 | 0,4403 | 0,4746 |
| KNR | 0,5290 | 0,5644 | 0,4127 | 0,4403 | 0,4746 |
| IR | 0,7431 | 0,7624 | 0,6797 | 0,6947 | 0,7134 |
| INR | 0,9950 | 0,9954 | 0,9938 | 0,9941 | 0,9944 |
| PA;R | 0,8611 | 0,8715 | 0,8267 | 0,8349 | 0,8450 |
| PA;NR | 0,7432 | 0,7625 | 0,6798 | 0,6948 | 0,7135 |
| PB;R | 0,6212 | 0,6497 | 0,5277 | 0,5499 | 0,5774 |
| PB;NR | 0,7009 | 0,7234 | 0,6271 | 0,6446 | 0,6663 |

 

 

Table 5 – Outcomes of learning with the hyperbolic tangent, substantial initial crowdfunding

 

The learnt values of input variables after 5000 rounds of learning (hyperbolic tangent, substantial initial crowdfunding):

| Variable | Instance 1 | Instance 2 | Instance 3 | Instance 4 | Instance 5 |
|---|---|---|---|---|---|
| cumulative error | -0,33 | 0,2 | -0,06 | 0,98 | -0,25 |
| LCOER | -0,1089 | 0,3800 | 0,2100 | 0,6245 | 0,0110 |
| LCOENR | 0,2276 | 0,5681 | 0,4497 | 0,7384 | 0,3111 |
| KR | 0,3381 | 0,6299 | 0,5284 | 0,7758 | 0,4096 |
| KNR | 0,2780 | 0,5963 | 0,4856 | 0,7555 | 0,3560 |
| IR | 0,1930 | 0,5488 | 0,4251 | 0,7267 | 0,2802 |
| INR | 0,9843 | 0,9912 | 0,9888 | 0,9947 | 0,9860 |
| PA;R | 0,5635 | 0,7559 | 0,6890 | 0,8522 | 0,6107 |
| PA;NR | 0,1933 | 0,5489 | 0,4252 | 0,7268 | 0,2804 |
| PB;R | -0,1899 | 0,3347 | 0,1522 | 0,5971 | -0,0613 |
| PB;NR | 0,0604 | 0,4747 | 0,3306 | 0,6818 | 0,1620 |

 

The cumulative error, the first numerical line in each table, is something like memory. It is a numerical expression of how much experience the perceptron has accumulated in the given instance of learning. Generally, the sigmoid neural function accumulates more memory, as compared to the hyper-tangential one. Interesting. The way of processing information affects the amount of experiential data stored in the process. If you use the links I gave earlier, you will see different logical structures in those two functions. The sigmoid generally smoothes out anything it receives as input. It puts the incoming, compound data x in the negative exponent of Euler’s constant e ≈ 2,72, and then places the result in the denominator of 1, i.e. it computes 1/(1 + e^(-x)). The sigmoid is like a bumper: it absorbs shocks. The hyperbolic tangent is different. It sort of exposes small discrepancies in input. In human terms, the hyper-tangential function is more vigilant than the sigmoid. As can be observed in this precise case, absorbing shocks leads to more accumulated experience than vigilantly reacting to observable change.
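That difference in vigilance shows up directly in the local derivatives of the two functions. Around zero, the slope of the hyperbolic tangent is four times that of the sigmoid, so the same small stimulus produces a much stronger local reaction; a quick numerical check:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Local derivatives of the two neural functions at a few points
for x in (0.0, 2.0, 4.0):
    d_sig = sigmoid(x) * (1.0 - sigmoid(x))  # derivative of the sigmoid
    d_tanh = 1.0 - math.tanh(x) ** 2         # derivative of tanh
    print(f"x = {x}: sigmoid' = {d_sig:.4f}, tanh' = {d_tanh:.4f}")
```

At x = 0 the tanh derivative equals 1 against the sigmoid's 0.25, which is one way to read its vigilance; both derivatives vanish as the input saturates.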

The difference in cumulative error, observable in the sigmoid-based perceptron vs the one based on the hyperbolic tangent, is particularly sharp in the case of a market with substantial initial crowdfunding K. In 3 instances out of 5, in that scenario, the hyper-tangential perceptron yields a negative cumulative error. It can be interpreted as the removal of some memory, implicitly contained in the initial values of input variables. When the initial K is assumed to be 0,01, the difference in accumulated memory, observable between the two neural functions, shrinks significantly. It looks as if K ≥ 0,5 were some kind of disturbance that the vigilant hyperbolic tangent attempts to eliminate. That impression of disturbance created by K ≥ 0,5 is reinforced as I synthetically compare all four sets of outcomes, i.e. tables 2 – 5. The case of learning with the hyperbolic tangent, with substantial initial crowdfunding, looks radically different from everything else. The discrepancy between alternative instances seems to be the greatest in this case, and the incidentally negative values in the input tensor suggest some kind of deep shake-off. Negative prices and/or negative costs mean that someone external is paying for the ride, probably the taxpayers, in the form of some fiscal stimulation.


Aware of how we generalize

 

I wonder whether I can develop sort of a general pattern on the basis of the case studies I presented in my recent updates: « Let’s Netflix a bit », « Brique par brique », and « Dans la tête d’un non-éléphant ». I mean, what can I do, as a social scientist, with a cognitive sequence that starts with finding the key metrics of the situation, in order to discover anything, then unfolds into finding the sources of information on the actual values of those metrics, just to use that information to identify the key resources, the core processes, and the fundamental ethical values of the social pattern studied.

Key metrics are observable, empirical variables, which I can use to assess the situation in a social context. Finding those key metrics and nailing down their actual values is the essence of what can be deemed the ‘economic method’. This is very largely the essential discovery that Adam Smith made: social systems can be observed mathematically, as sets of equations. Thus, the first step in that method I am unfolding in front of myself, and in front of you, my readers, is to find the key numbers in my social environment. How many people are there in my immediate social circle? With how many of them should I interact daily in order to build for myself a position in the local hierarchy?

Yes, I know, it sounds a bit artificial. People don’t intuitively think like this. I know I intuitively don’t think like this. First conclusion: this method I am unfolding is largely made into formalized research, not really the first cognitive reflex in a new social situation. I think that the other branch of the same path, which I have just published in French, in that update entitled « Dans la tête d’un non-éléphant », is a bit more intuitive. It spells: find the key rules of conduct in your social environment, try to nail down their alternative formulations, and find the meta-rules that serve to select the actual rules of conduct among all the available alternatives. In other words, figure out the game which is being played, get the hang of its rules, and then you have better grounds for enquiring about the numbers.

Good, let’s practice. I start exercising with the topic of my current research: renewable energies and my EneFin concept, that quasi-cooperative scheme where small consumers of energy buy, in the form of complex contracts, both energy and capital shares in the local suppliers of that energy. See, for example, that update entitled ‘The Tribal Equilibrium of the Joule’, in order to have a relatively fresh idea of that concept. When I step, as a newcomer, into any local market of energy, how can I identify the basic rules of the game that is being played in the whereabouts?

As regards energy, the basic game is about how much energy I need in order to occupy a given place in the local social hierarchy, and how much I have to pay for that amount of energy. As you can notice, I do not really care, as a social Robinson Crusoe, about the natural environment. Yes, it sounds and looks primitive and short-sighted. Still, as I am trying to deconstruct honestly the course of social discovery, this is what I observe in my own thinking about the market of energy: reference to the natural environment and its well-being comes only secondarily, after I have put in place my essential bearings in the social reality strictly spoken.

Anyway, in this particular case – the market of energy – the rules of the game I am playing are very much quantitative. They are prices and quantities, essentially, but not exclusively. The contracts habitually practiced in that market come immediately after, or even ex aequo with prices and quantities. Contracts give an idea of the market power that individual market players can really deploy when negotiating the modalities of their mutual transactions.

If I had to present this path of discovery in a teachable form, like ‘Getting to know a local market of energy, in five easy steps’, what would it look like? Lesson #1 would probably start with a general advice: take some statistics about the local market of energy, for example from the website of the International Energy Agency, or from the World Bank, and check how much energy you are likely to consume per your own capita. Yes, that data is in kilograms of oil equivalent or in tons thereof, and your energy bill will be most likely in kilowatt hours, and thus it is useful to remember: 1 kg of oil equivalent = 11,63 kWh. Try to think, how much energy, above the strictly personal use, does a person need, in this particular market, when they want to start a small business, or when they want to turn from an individual into an organisation?
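The unit conversion from lesson #1 is trivial, but worth having at hand; a helper, with a hypothetical per-capita figure:

```python
KWH_PER_KGOE = 11.63  # 1 kg of oil equivalent = 11.63 kWh

def kgoe_to_kwh(kgoe):
    """Convert energy from kilograms of oil equivalent to kilowatt hours."""
    return kgoe * KWH_PER_KGOE

# Hypothetical case: a country reporting 3000 kgoe consumed per capita a year
print(round(kgoe_to_kwh(3000), 1))  # 34890.0
```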

Lesson #2: get to know the prices of energy in your local market. Is there any reliable source of information in this respect, or do you have to sign, first, a contract with the local supplier of energy, and buy some, and receive a bill, in order to know the actual price? The transparency of pricing is an important institutional trait in energy markets, especially as it comes to the relative market power in small users, like households or really small businesses. As – or if – you become informed about the prices of energy, you can calculate the typical budget spent on energy, or simply the average annual energy bill, in typical social actors.

Lesson #3: get to know the typical contracts in that local market of energy. First of all, is there any source of information about the contents of a typical contract for the supply of energy, or, as it is sometimes, and sadly, the case for the prices of energy, do you have to sign the contract first, and only then are you entitled to receive all those appendixes in small print, which fully explain what you have just signed? Yes, I know some of you may laugh at this point, but I remember signing my first contract for the supply of electricity, for my first fully owned apartment in Poland, back in 1992. I had to sign a summary form, which essentially stated that I agreed to terms which would be delivered to me in written form once I signed that particular form. Kafka, you say? Yes, happens sometimes.

Anyway, in that lesson #3, the interesting path to follow in your own discovery is to observe the diversity of contracts. I am connecting, here, to my last update in French, entitled « Dans la tête d’un non-éléphant ». In this particular phase of research, it is interesting to discover how many different and clearly distinct contractual patterns are there in the given local market. Is it a ‘mono-contract’ environment, or is there some flexibility? The former suggests a typical market structure from textbooks on microeconomics: monopoly or oligopoly. The latter suggests something more competitive.

Basically, lessons #1 – #3 should tell us what room for institutional innovation is there in this precise market, i.e. what are the odds that a new institutional scheme will work and gain participants.

Good, lesson #4. Once we know the quantities, the prices, and the contracts, it is time to try something practical: a business concept. Not even a full-blown business plan, just a business concept. As you see that local market, can you think of a new, promising business? Logically, what you supply in the market of energy is, well, energy. There is not much room for product innovation in that respect. Still, as you think of it, what we consciously purchase is not the strictly spoken energy, as we do not decide about each individual electron flowing through the plug, but rather the access to energy. You can think about many different forms of that access.

A quick idea, just like that. Imagine a city with many, publicly available charging points for electronic devices. At some of them you can pedal to generate electricity, but just at some. Imagine that you have something like a unique login ID, or codename, which you use to plug your electronics into those publicly available sockets. Every time you use that form of energy outside your household (or the headquarters of your company), the corresponding intake of kilowatt hours charges your account. That would be a market of energy, where consumption is as individualized as technologically possible.

In that lesson #4, you can play with assessing this business concept. What are the odds that it catches on anywhere on Earth? What is the SWOT map, i.e. what are the required competitive strengths, the weaknesses to avoid, as well as opportunities and threats generated by the market?

I have that intuition that you reach the summit of scientific understanding about anything when you can design and control an experiment pertaining to that anything. This is the path to follow in your lesson #5 about the market of energy. Design and control an experiment, related or unrelated to the business concept from lesson #4. How can people experiment with energy? What types of behaviour are important to observe experimentally? How can you achieve, in your experimental environment, the usual attributes of a good experiment: isolation of precise phenomena, acceleration of their occurrence (as compared to real life), observability?

Why do I put experimentation in the last lesson? This is an old principle known to all engineers: if you can experiment with something, and survive, and have some fun, and, on the top of all that, have some new knowledge, it means you’ve got the hang of the thing.

I am taking on another particular case: the teaching of management, a teaching I deliver to 1st-year undergraduate students. If, hypothetically, I try to manage any type of organisational structure, from any hierarchical position that allows any management whatsoever, what are my first steps into an unknown territory? How can I know the rules of the game, and which rules are a priority to figure out? Intuitively, I would look for the things that hold the surrounding organisation together. Are those people working together, although, let’s face it, they sometimes hate each other, because they refer and report to a common leader, or rather because they have common goals?

Thus, my lesson #1 in management would consist in observing patterns of behaviour in people around me. What exactly do they do together? How do they cooperate? How do they compete against each other? It is important, in that first lesson, out of the five (allegedly) easy steps, to observe rather than speculate. Just find patterns in human behaviour. The easiest way to do it is sequencing. Any pattern, in any part of observable reality, is a sequence of events. As you observe human behaviour around you, look for recurrent sequences. There are bound to be some. Mr A holds a meeting, every three or four days, with persons B, C, D, and E. The meeting usually lasts about one hour. The person D is usually pissed off after those meetings.

Another one: when a customer complains about the poor quality of the product, those complaints usually trigger a row. Who is arguing with whom?
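These recurrent sequences can be hunted down quite mechanically. Below is a minimal sketch in Python; the event log and its labels are entirely made up for illustration, and the window length of three events is an arbitrary choice.

```python
from collections import Counter

def recurrent_sequences(events, length=3):
    """Count every contiguous window of `length` events in an observation log."""
    return Counter(zip(*(events[i:] for i in range(length))))

# A made-up observation log; each label stands for one observed event.
log = ["meeting_A", "report_D", "complaint", "row",
       "meeting_A", "report_D", "complaint", "row",
       "meeting_A", "coffee"]

# Windows that occur more than once are candidate behavioural patterns.
patterns = {seq: n for seq, n in recurrent_sequences(log).items() if n > 1}
print(patterns)
```

On this toy log, the sketch flags the meeting–report–complaint–row cycle as a recurrent pattern; in a real organisation, the log would come from your own notes.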

Lesson #2 means jumping to another source of information: financial statements. Here, a remark. I know many people have a profound disgust for numbers and mathematics, usually because of shitty teaching thereof at the level of elementary school. Still, the outcomes of shitty teaching can be reversed, simply by triggering our own curiosity into action. The financials of an organisation are like the health metrics of a human. If you want to know somebody’s health, you need to understand the meaning of numbers like pulse, body temperature, the average length of sleep during one night etc. Same thing with financials. They are pertinent metrics of an organism, period.

So, you go to those financials, and you take all of them, like the balance sheet, the income statement, the cash flow statement, and you simply look for the greatest numerical values. You figure out what is sticking out, quite simply. You select the categories attached to those numbers, and you connect them, as if you were connecting the dots in one of those graphical quizzes. This is an almost painfully basic, practical application of the scientific principle known as Ockham’s razor. The principle states that the most obvious answer is usually the right one, where ‘the most obvious’ means the one which requires the fewest assumptions. In this case, the greatest financial values are supposed to be the most important.

You can also get more sophisticated, during lesson #2, and take financial statements from two distinct periods of the same organisation. You match the financial categories from the two periods, and you calculate the relative magnitude of change, like the value from period T1 (later) divided by the value observed in the same category in period T0 (earlier). If you move along this tangent, you will pay attention to those categories where the relative magnitude of change x(T1)/x(T0) is the greatest, upwards or downwards.
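Both variants of lesson #2 fit in a few lines of code. Here is a sketch in Python, with invented financial figures (the category names and amounts are mine, not from any real statement): it ranks categories first by absolute size, then by the relative magnitude of change x(T1)/x(T0).

```python
# Invented figures for two periods of the same hypothetical organisation.
t0 = {"revenue": 1200, "inventories": 300, "long_term_debt": 150, "cash": 80}
t1 = {"revenue": 1260, "inventories": 290, "long_term_debt": 310, "cash": 60}

# Ockham's razor, the basic version: the biggest numbers first.
by_size = sorted(t1, key=t1.get, reverse=True)

# The more sophisticated version: x(T1)/x(T0), the relative magnitude of change.
change = {k: t1[k] / t0[k] for k in t0}
by_change = sorted(change, key=lambda k: abs(change[k] - 1), reverse=True)

print(by_size[0])    # the largest category
print(by_change[0])  # the category that moved the most, up or down
```

In this made-up example, revenue is the biggest number, but the long-term debt, having roughly doubled, is what should catch your eye in the second ranking.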

Lesson #2 teaches you basic empirical observation of quantitative variables, and now, in lesson #3, you are going to combine those empirical observations with the patterned human behaviour from lesson #1. Whatever type of measurement you chose in lesson #2 – the greatest absolute financial values or those displaying the greatest magnitude of change – in lesson #3 you assume that people do things about money. The patterns of behaviour you nailed down in lesson #1, they have a function, and that function is most likely connected to those big, or those quickly changing, financial amounts you observed in lesson #2. In lesson #3, therefore, you are pinning down the actual strategy – or strategies – in the organisation you are studying.

Here, one important distinction is due. The commonly used definitions of strategy, in management science, usually refer to the goals of the organisation, and the tasks planned in order to achieve those goals. Me, in my own little scientific garden, I cultivate the beautiful, behavioural flower of no-bullshit. I deeply agree with Bernard Bosanquet who used to say that it is bloody hard to know for sure what people want, and it is much more sensible to watch what they do. I also cherish John Nash’s point of view, namely that a strategy needs to have reasonably proven payoffs if it is to be seriously used in the future. To me, a strategy is a recurrently repeated pattern of action, with recurrently occurring results. A strategy can be something that people – or organisations – do even without being aware of doing it.

Anyway, in lesson #3, you define those connections between money and behaviour as the typical strategies in the given organisation. Time has come for lesson #4, the lesson of what-if, the lesson of change. You know what people usually do in an organisation, you know what they are after in terms of financial payoffs, and now you can imagine what will happen to this organisation if some of those parameters change. For example, what kind of change will this business – if this is a business, of course – undergo if they have the opportunity to attract an extra 40% of equity? (i.e. an addition of equity capital equal to 40% of what they already have as equity; look up the definition of ‘equity’, just to make sure you know what I am talking about). What would happen if they had to cut their equity down by 40%? What kind of strategies would they apply if there were a new opening in their target market which allowed them to pump their gross margin up by 20%, through higher prices? What if a new tax cut their gross margin down by 20%?

Time for lesson #5, which is of the same kind as lesson #5 about the market of energy: design and control an experiment. Take the organisation you have studied in lessons #1 – #4. You can use the hypothetical changes you traced in lesson #4, or something else that comes to your mind as intriguing, like what-happens-if-I-press-this-button-oops-I-am-sorry, but now transform those paths of change into experimental sequences. You give people some input – a task, a piece of information etc. – and you design a detailed sequence of how they should be responding to that input. You design that sequence so that the response, observed in the participants of your experiment, brings you the most valuable information possible.

You know what? I start liking that approach ‘learn Whatever The Hell You Want in 5 Lessons’. I know, I know: liking my own ideas is a slippery path. It is easy to misstep and fall into the abyss of hypocrisy. Still, I like the thing. Those five lessons about the market of energy seem to cover pretty much the basics of Microeconomics, one of my main teaching curriculums, and so, having covered microeconomics and management, I attempt a graceful jump towards another of my teaching paths, that of Political Systems and Economic Policy.

In order to make my jump look more graceful, i.e. in order to mask the possible awkwardness of my movements, I am doing something I like doing: I revert. I like reverting. This time, when teaching something about Political Systems, I will start, in lesson #1, by asking my students to design an experiment. Yes, this time, they start at the point where the students of management would be asked to finish. Let’s take a practical case: the constitution of The United Republic of Tanzania. The one from 1977.

Click this link, download the constitution and ask yourself the following question: how could you possibly stress-test the system? I mean, where can you see the weakest spots in the constitutional order? What sort of phenomena can hypothetically turn this order into disorder, and into what kind of disorder? At this stage, as this is your lesson #1, you can advance pretty intuitively. I am giving an example. In Part II, Article (47), points (1) and (2) of this constitution you can find the following rule: « 47.-(1) There shall be a Vice-President, who shall be the principal assistant to the President in respect of all the matters in the United Republic generally and, in particular shall assist the President in making a follow-up on the day-to-day implementation of Union Matters, perform all duties assigned to him by the President, and perform all duties and functions of the office of President when the President is out of office or out of the country. Without prejudice to the provisions of Article 37(5), the Vice-President shall be elected in the same election together with the President, after being nominated by his party at the same time as the Presidential candidate and being voted for together on the same ticket. When the Presidential candidate is elected the Vice-President shall have been elected. »

Now, imagine that, for some reason, the Vice-President has not been elected, or has been elected but he or she has resigned right after the election, and there is no one willing to take the office. In short, no Vice-President. What happens to the political system of Tanzania in such a case? Is it like that domino tile which, once knocked down, drags the entire constitutional order into deep s**t (spell, as usual, s-asterisk-asterisk-t)? Or, maybe it is just a minor inconvenience?

Take another constitution, that of Australia. Do the same scanning as for the previous case. Look for really soft spots in the system: the institutions, political actors, or mutual checks of power between political actors which, once disabled or out of control, can knock the whole system out of balance. The question is quite important, by the way. Australia has just had its tenth Prime Minister appointed over the last 10 years. This is a lot of change. Some kind of deep imbalance might be at work. Maybe you can put your finger on it?

Time for lesson #2: generalize the experiments from lesson #1. Take the same countries as in lesson #1 – Tanzania and Australia, in this case – and try to sketch the alternative avenues their respective political systems could possibly take from the present moment into the future. Like three alternative paths of change for each country.

Lesson #3: generalize the observable idiosyncrasies from lessons #1 and #2. What structural (i.e. durable) differences can you notice between the two cases, Tanzania and Australia? What sort of difference between them can you pin down, as for the relative solidity of their constitutional orders, as well as regarding their possible paths of change? How would you describe the unique features observable in each of those political systems?

Lesson #4: figure out the rules of the game. If you had to give a piece of advice to a friend on how to make a political career in Tanzania, what would you recommend? What does it mean to make a political career in Tanzania? What are the most likely stages and pit stops? How long could it take to make the career in question? What strategies should your friend use to cover that path?

Move your (imaginary?) friend to Australia and try to repeat the process of designing their career path in politics. How is it different from Tanzania?

Lesson #5: nail down general metrics for political systems. Sum up your experience from lessons #1 – #4. Now, imagine that somebody asks you: ‘What are the most important facts and numbers to look at if we want to understand how a given political system works? Which stones should we lift and turn in order to discover the fundamental mechanics of a political system?’. Now, I know that you might feel slightly ill at ease at this point. How can you make general rules on the grounds of two case studies? Well, firstly, this is how science works: brick by f***ing brick, you build that house. You observe one thing, you observe another thing, and you draw your conclusions even if you are not aware of drawing them. That whole piece of intellectual gymnastics, in 5 lessons, serves to make you aware of how you generalize.

Besides, when it comes to political systems, you do not have a huge sample of cases anyway; there are barely some 150 more or less observable entities on the entire planet.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon asks me to ask you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


The Tribal Equilibrium of the Joule

 

This update has been ripening for quite a few days. Things have been happening. Things usually do, mind you. What else could they do, if not happen? Anyway, the most important experience I went through during those last days was watching that Cannes-prized movie by Pawel Pawlikowski: ‘Cold War’ (PL: ‘Zimna wojna’). It was the first movie ever where I saw people practically nailed to their seats, watching the final subtitles scroll up the screen. Everybody was sitting still, until a man from the staff came and asked us to leave.

Man, it stirred a lot inside my head! I know that foreign audiences see that movie mostly as sort of a love story, served in unusual packaging. Yet, for me, and, as far as I know, quite a lot of Polish people from my generation, it is much more. It is a story about existential choices that we used to make. When I was 13, in November 1981, my mother took me and basically escaped from Poland, just 15 days before the introduction of Martial Law. Then, I spent a few years living in France. Touch and go, you would say. Very much so, indeed.

Still, as I contemplate the life of the main male character in ‘Cold War’, I have very much the same recollection of that strange feeling, sort of being emotionally astride between Poland and France. It is as if you had just half of your emotional energy at your disposal, whilst the other half is like frozen. Bloody unpleasant, at the end of the day. A bit like depression (which I experienced too, many years later), but not quite the same. Gives you an impression of existential lightness, but you know, lightness is the absence of weight, and weight is what holds you in place, and what allows you to bounce up from a solid support. No weight, no bouncing up.

Years later, as a young adult, I decided to return to Poland, where I have been living ever since. Right now, as I observe a mounting wave of nationalistic fanaticism among my compatriots, and as I see them cultivating the same patterns of behaviour I remember from the communist Poland of the 1970s, I feel like shaking them, especially the young ones, and saying: ‘Look, I have already been down there, down that deep, dark asshole where you are heading right now, singing patriotic songs. Really, people, there is nothing glorious down that avenue’. Another voice inside of me says: ‘Look, man. There is no use in shaking anybody. They want to go all the way down to the bottom of the shithole? Let them go, and get away from this cursed country! There are so many other places to live, nice and cosy’.

Still, here comes the first lesson I start coining up after having watched ‘Cold War’: once you start running away, it is bloody hard to stop. You are very likely to keep running for ever. Here comes the lesson I have learnt scientifically, over those last years: the catalogue of nice and cosy places to live, in the world, is shrinking on a daily basis. If I am to go anywhere, it is just to meet a different set of challenges. In Poland, we have that saying: in life, there ain’t soft game. There is not much point in looking for any. Life is tough, ‘cause it is meant to be.

The second lesson, out of that movie, is the following: in the communist Poland, we were deeply and utterly f***ed. It seemed so hopeless that emigration was a much better prospect than staying. Still, we moved on. As I see my country today, we have made quite a stroll, out of that communist shithole. My dream is to devise some sort of educational package, in social sciences, which could help the young generation to understand the mechanics of social change.

I got some inspiration, here. Mind you, this is also, partly, the fault of John Nash and John Rawls, as you can guess from my last update in French, the one entitled « La situation est tellement nouvelle – hommage à John Nash ». What starts taking shape in my mind is a logic in three facets: ethical values <> rules of the game <> economic equilibrium. Why this? Firstly, and most importantly, I am deeply convinced that social sciences have a practical purpose, and this purpose is different from the theoretical one. Just as physics expresses itself in a construction crane or a wind turbine, and just as biology and chemistry find their expression in a good diet or in a cure for a dangerous disease, social sciences need a practical application.

In Poland, as in many other places, I can observe a mounting wave of emotionally heated, quasi-tribal conflicts marked by an astounding amount of hate. I think I understand what is happening to us, humans. There are more of us on this planet than there have ever been, and even according to my most optimistic analyses, we are like 13% overshot as regards the available food and energy. With more and more of us around, there are more and more social interactions. More interactions mean faster learning and more experimentation, in a resource-constrained environment.

What can a civilisation be experimenting with? I have that simple social theory: any social structure is made of group identity, on the one hand, and of the definition of individual social roles, on the other hand. Out of those two, group identity is sort of more primal. This is how homo sapiens survived when they were barely sapiens enough to make a fire: by ganging up together. Social roles come second; they are the fancy part, being defined once the most pressing urge for survival is under control. Still, as the founding father of my discipline, Adam Smith, used to claim, it is precisely the definition of social roles which gives a real developmental kick to any society. Moreover, the science of microeconomics proves – mostly indirectly, mind you, and this is the kind of flaw I would like to remedy – that the way people define and endorse social roles is significantly correlated with the way that markets work, which, in turn, translates into welfare, as well as into the ability to produce purposeful social change.

As a civilisation, we are intensely experimenting with the more primal of the two social dimensions, namely with group identity, and we leave social roles for dessert. In a more and more densely populated world, group identity is almost inevitably defined by opposition rather than by congregation. I mean, in a desert, a social group is made of people who simply are around. In a city, a social group is mostly made of people who define themselves as different from all those other f***ers surrounding them. In a densely populated world, you define a social group by defining common enemies.

This is where my own intellectual stance comes in. With the presently observable level of social tensions, it is important to communicate – and educate – that we can be really better off if we give up auctioning group identities built on hate, and focus on creative experimenting with social roles.

The whole challenge consists in that ‘show to people’ part. I am an academic, and I know how appallingly boring science can be when presented in the wrong way. If I am to show other people, i.e. my students, how they can define social roles, it is interesting to start with my own social role. Once again, I am stressing the difference between social role and group identity. If I say ‘I am agnostic’ (which I am, by the way) or ‘I am Polish’ (yep, this is me, too), I am ascribing myself to a group of people, i.e. agnostics and Poles, respectively. This is group identity. On the other hand, my social role is made of what I do, not the category I identify myself with.

There is an important remark I feel like dropping into the conversation right now. There are the things I think, the things I say, the things I say I think I do, and, finally, the things I really do. My ambition, in the understanding of social roles in general, and my social role in particular, is to get to that last category, i.e. to define social roles in terms of what people really do. I fully acknowledge the importance of communicating about what we do. In some cases, like (re)running for the presidency of a country, it can be quite important to talk a lot about the things a person has done when in office. Still, I hold that what we really, actually do matters more than what we say we do.

We do things because we know how to do them. Human behaviour is based on incremental learning of what is possible here and now, as opposed to what is possible under the condition of mastering new skills, and, as still another distinction, as opposed to what seems utterly impossible. The choices we make are always determined by what we currently know how to do. That skill-base is, at the same time, the base for framing our decisions.

You can consult Tversky & Kahneman (1992[1]) for a good theoretical review of this particular issue; their account of human choice strongly supports this view of decisions as framed by the currently available skill-base.

What I do is, in other words, my behaviour. In that vast array of phenomena, I can make basic distinctions simply by assessing the frequency of occurrence. The most important part of what I do is what I do most recurrently. Yes, I cut out things like breathing, brushing my teeth or having my coffee. Frequent, but not really pertinent to defining a social role. What I do every day, as socially meaningful things go, consists, first of all, in being a husband and a father. This is generally a bliss, seriously, but on some (frequent) occasions, it is a job. You have targets, deadlines, and much less leisure time than you would expect. Next, I do scientific or peri-scientific communication every day. I write things for my blog, I write strictly scientific content (articles, books), I do research on empirical data, I prepare educational materials, I review the literature, I teach etc. I don’t know if it’s important, but communicating in itself is important to me. Keeping a blog has dramatically increased my output as a scientific communicator. In other words, when I know I have an audience, I switch into a much more active mode of communication.

I can provisionally say that I am a family man, a researcher and a scientific communicator. If I had to visualise myself as a node in a network, I would imagine something like a small, semi-artisanal factory. I absorb input in the form of information, and emotional stimulation from other people. I produce output, which consists in transformed information, and support to my family and my students. As I produce transformed information, I try to be kind of original and persuasive: I try to distinguish myself. As I produce social support, I value being kind of solid and dependable. The kind of support I give is like ‘do-whatever-you-want-just-remember-I-am-here-when-you-need-me’.

From a different point of view, I can visualise myself in a hierarchy. I am prone to see myself as sort of lower-middle class, and I like developing new skills in order to improve my own perception of my hierarchical position. I had a few attempts at leadership, in my adult life, but they were just moderately successful. People whom I used to work with, on those occasions, would say that I am kind of cool, but that with time I tend to cut ties with those I am supposed to lead. If you want, I tend to be too much of a leader in my own head, and not enough on the outside. On the other hand, when I took a personality test, a few months ago, it turned out I have strong predispositions to leadership. F**k, man! Strong predispositions to leadership, combined with just moderately successful experience in leadership, make for a lot of strong predispositions basically sitting idle, on their couch, in front of their TV. Sounds a bit dangerous. Such predispositions tend to do silly things. Revolutions, for example.

Once I have done this basic inventory of my behaviour, I can start asking embarrassing questions. Can I branch or switch into other types of behavioural sequences, and how can all that branching and switching alter my social role? What would be the consequences of such a change?

By branching, I mean producing mutations of the routines I repeat presently. It is evolution rather than revolution. I can do it, at the most elementary level, by changing frequencies, volumes, or intensities. In other words, I can do something that I have already been doing, just more or less frequently, more or less assiduously, or, finally, more or less intensely. This is sort of the Undergraduate level of branching in my individual behavioural sequences. At the Graduate level, the Master’s one, I can casually drop some new pieces of behaviour into my routine, and keep them dropped in, like for many months in a row. On the other hand, switching means going into a sequence of completely different pieces of behaviour.

Good. Let’s branch. I have that internal curious ape inside of me, and apes know about branching. I mean, they certainly know about branches, but it is almost the same. Just look up the theory of graphs and you will see for yourself. How can the catalogue of behaviourally defined social roles meddle with economic equilibrium? In order to ponder this issue, I am returning to that idea of mine which I have been ruminating for months: the EneFin concept. I already posited the hypothesis that new institutional schemes in the market of energy can trigger positive social change. You can refer to « Which salesman am I? » from the 23rd of July.

I made a comparison of four countries – South Sudan, Sri Lanka, Poland, and Germany – with respect to the structurally apprehended consumption of energy, based on the data provided by the International Energy Agency (IEA). In the last year of the time series provided by the IEA, i.e. 2015, the populations of these countries were, respectively, 11 882 136, 20 966 000, 37 986 412, and 81 686 611 people. Now, you can have a glance at Table 1, below, and Table 2, further below. In Table 1, you can see the final use of energy in those countries, in KTOE, or kilotons of oil equivalent, structured at the level of final consumption. Table 2, for the sake of complete presentation, shows the coefficients of energy use per capita, in TOE, or tonnes of oil equivalent per person.

When studying the numbers in those tables, it is good to keep in mind that essential truth: human civilisation is all about the transformation of energy. No energy, no civilisation, sorry. As you compare South Sudan with the remaining three countries, you can see there are things in those three which are virtually nonexistent in South Sudan: industry, transport, commercial and public services. More energy means a different structure of activity in a given society. A different structure of activity means, in turn, a different catalogue of social roles. New jobs, crafts, and skillsets appear.

New social roles mean that people develop new sets of goals. For example, in Germany or Poland, a young person can easily say ‘I want to be a bloody good bus driver’. The more energy in transport, the more chances that such a goal can be achieved. If I want something, it is important to me, and it is important to cultivate the skills required for achieving what I want. Things that are important to me, and to other people, are socially valuable outcomes and patterns of behaviour, thus they become ethical values. As brutally strange as it can look, the ethical values of a society are very likely to change as the consumption of energy changes. The more energy is being consumed in a given field, the more people can develop an ethical stance regarding outcomes and skillsets corresponding to that field.

Mind you, the paragraph above has been free-style thinking. I have written something which, on the one hand, seems obvious to me, and still, on the other hand, as I am trying to fathom all the further-reaching consequences of it, my internal curious ape starts scratching its chin. Lots of branching, here.

Table 1 Final use of energy in South Sudan, Sri Lanka, Poland, and Germany, International Energy Agency, KTOE [kilotons of oil equivalent]

  South Sudan Sri Lanka Poland Germany
Industry  2  3 009  14 094  55 275
Transport  219  2 857  16 586  55 693
Other  201  4 042  29 934  87 934
Residential  168  3 470  18 836  53 122
Commercial and public services  19  444  7 810  34 691
Agriculture/forestry  13  –  3 287  –
Fishing  –  –  –  –
Non-specified  1  128  –  121
Non-energy use  –  31  5 540  21 266

Table 2 Final use of energy per capita, in South Sudan, Sri Lanka, Poland, and Germany, TOE [tonnes of oil equivalent], author’s own calculations

  South Sudan Sri Lanka Poland Germany
Industry  0,0002  0,144  0,371  0,677
Transport  0,018  0,136  0,437  0,682
Other  0,017  0,193  0,788  1,076
Residential  0,014  0,166  0,496  0,650
Commercial and public services  0,002  0,021  0,206  0,425
Agriculture/forestry  0,001  –  0,087  –
Fishing  –  –  –  –
Non-specified  0,000  0,006  –  0,001
Non-energy use  –  0,001  0,146  0,260
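As a quick check, the coefficients of Table 2 can be recomputed from Table 1 and the populations quoted above. Here is a sketch in Python for the ‘Industry’ row; since one KTOE is 1 000 tonnes of oil equivalent, the per-capita figures come out in tonnes of oil equivalent per person.

```python
# Populations (2015) and the 'Industry' row of Table 1, copied from the text.
population = {"South Sudan": 11_882_136, "Sri Lanka": 20_966_000,
              "Poland": 37_986_412, "Germany": 81_686_611}

industry_ktoe = {"South Sudan": 2, "Sri Lanka": 3_009,
                 "Poland": 14_094, "Germany": 55_275}

# 1 KTOE = 1 000 TOE, so KTOE * 1 000 / population gives TOE per capita.
per_capita_toe = {c: industry_ktoe[c] * 1_000 / population[c] for c in population}

for country, toe in per_capita_toe.items():
    print(f"{country}: {toe:.4f} TOE per capita")
```

The printed values match the ‘Industry’ row of Table 2.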

I am finishing this update with a quick note based on the World Energy Investment Report 2018 by the International Energy Agency. The report suggests that we are currently witnessing two types of important change in the sector of renewable energies. On the one hand, the investment made and committed so far has allowed a sharp drop in the LCOE (levelised cost of energy) of renewables, and a further fall in that LCOE can be predicted for the immediate future. On the other hand, the aggregate value of investment in energy in general, and in renewable energies in particular, is on a decreasing path as well. We have falling investment, and falling costs. These two phenomena combined show an economic landscape very similar to that known from the 19th century.
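For readers unfamiliar with the metric, LCOE is the ratio of discounted lifetime costs to discounted lifetime energy output. A minimal sketch of that textbook definition, with all project numbers invented purely for illustration:

```python
# LCOE (levelised cost of energy): discounted lifetime costs divided by
# discounted lifetime energy output. A simplified, constant-output version.
def lcoe(capex, opex_per_year, mwh_per_year, years, rate):
    disc_costs = capex + sum(opex_per_year / (1 + rate) ** t
                             for t in range(1, years + 1))
    disc_energy = sum(mwh_per_year / (1 + rate) ** t
                      for t in range(1, years + 1))
    return disc_costs / disc_energy  # cost per MWh

# A hypothetical small wind project: 1.5 M EUR capex, 30 k EUR/year of upkeep,
# 3 500 MWh/year of output, 20 years of life, 5% discount rate.
print(f"{lcoe(1_500_000, 30_000, 3_500, 20, 0.05):.2f} EUR/MWh")
```

Lower capital costs, cheaper upkeep, or higher output all push the ratio down, which is the mechanism behind the drop the report describes.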

Still, in order to understand that phenomenon, it is worth keeping in mind that we are talking about net investment, after depreciation and divestment. For example, the big observable divestment in coal-fired power plants contributes to lowering the overall net investment in the energy sector.

There is a noticeable switch from the relatively less capital-intensive investments in traditional thermal technologies, like coal-fired power plants, towards more capital-intensive projects connected to more advanced, renewable technologies. It looks almost as if investors were purposefully looking for highly capital-intensive projects with a lot of experimentation, research and development on the way towards full operational capacity.

Have we overinvested in the sector of energy? Can institutional changes affect this state of things?



[1] Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297-323.

The essential business concept seems to hold

 

And so I am going straight into writing the business plan for my EneFin project. It means that from now on, for like the next two or three weeks, updates to my blog will be basically pieces of business plan, or nearly. In my last update in French, I already started connecting the dots. I focused mostly on the market of renewable energies in Europe, in comparison to other regions of the world. You can look up « Fini de tourner autour du pot » for more details.

The EneFin project is a FinTech concept designed for the market of renewable energies, for the moment just in Europe, but I believe it can be transposed into other regions of the world. In that respect, there is one important thing to keep in mind: for the purposes of this project, Europe is defined as EU + Norway + Switzerland.

FinTech means finance, and so I take on studying the financial context. I want to identify basic patterns in that respect. I focus on two basic components of the financial market, i.e. on the supply of money, and that of credit. I take the two corresponding metrics from the resources published by the World Bank, i.e. the supply of broad money as % of the GDP, and the supply of credit from the domestic financial sector, once again as % of GDP. As they are both denominated in units of the GDP, I need that bugger too, and so I look it up, as given in constant 2010 US$.

The logic I am following here is that anything economic that happens, i.e. creation or consumption of utility, has a financial counterpart. Every hour worked, every machine installed in a factory etc. has a shadow sibling in the form of some money written on some accounts, which, in turn, has a shadow cousin in the form of credit written in the balance sheets of banks. Each gigawatt hour of renewable energy is supposed to be mirrored by monetary balances, and both of them, i.e. the gigawatt hour and its monetary shadow, are being mirrored by some lending and borrowing in banks.

I define five geographic regions, namely: a) Europe (EU + Switzerland + Norway) b) North America c) China d) Middle East & North Africa and e) Latin America & Caribbean. I consider China as representative for the emerging Asian economies. In each of these regions, I have already calculated the overall consumption of renewable energies in gigawatt hours. Now, I calculate the absolute supply of broad money, and that of credit, and then I compute two coefficients: broad money, and domestic credit, supplied per 1 GWh of renewable energy. In other words, I am assessing how big financial a shadow each such gigawatt hour has, across space and over time. I am trying to define the shape of that shadow, as well, by computing the difference between money supplied, and domestic credit provided, once again per 1 GWh of renewable energy consumed.
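The coefficients described above can be retraced in a few lines of code. This is just a sketch of the arithmetic; the figures in the example call are made-up round placeholders, not the actual World Bank data behind the graphs:

```python
# A minimal sketch of the 'financial shadow' coefficients described above.
# The input figures in the example call are hypothetical placeholders.

def financial_shadow(gdp_usd, broad_money_pct_gdp, credit_pct_gdp, renewable_gwh):
    """Broad money and domestic credit supplied per 1 GWh of renewable energy,
    plus the difference between the two (the 'shape' of the shadow)."""
    broad_money = gdp_usd * broad_money_pct_gdp   # absolute supply of broad money
    credit = gdp_usd * credit_pct_gdp             # absolute supply of domestic credit
    money_per_gwh = broad_money / renewable_gwh
    credit_per_gwh = credit / renewable_gwh
    return money_per_gwh, credit_per_gwh, money_per_gwh - credit_per_gwh

# Illustrative call with made-up round numbers:
m, c, gap = financial_shadow(gdp_usd=15e12, broad_money_pct_gdp=0.9,
                             credit_pct_gdp=1.2, renewable_gwh=800_000)
```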

The three graphs below portray the results of those calculations. Further below, I develop an interpretation of that data.

M per GWh

Domestic credit per GWh

M minus Credit per GWh

I’m back after the graphs, with, hopefully, a pertinent interpretation. Here is the thing: credit goes after expected profits, whilst money goes after transactions, as well as after uncertainty as to what kind of resources we should invest in so as to have those profits. You need credit to finance a new windfarm, and you need to monetize this credit, i.e. to transform it into monetary balances that you hold on your bank account, when there is a lot of alternative technologies for your windfarm, and you are really in two (at least two) minds as to which one is the right one for you.

Financial aggregates are like ravens – you can learn a lot by observing the way they go. Europe, such as defined in my business plan, is definitely burning the rubber as for the amount of renewable energy consumed. Still, credit from Europe seems to be going mostly to Middle East and North Africa, and a bit to China, as well. This is funny, because it has been so for centuries. Anyway, if a lot of credit emigrates, it is obviously looking for new horizons in foreign markets. It seems to see better horizons in them foreign markets than home. Money, i.e. our response to uncertainty and hesitation, tends to decline a bit in Europe, whilst it is swelling in other regions. The diagnosis that Herr Doktor Wasniewski can formulate is the following: Europe has developed a lot of actual, current capacity for generating renewable energies, and yet this capacity seems to be somehow short-legged. Both the innovative input from new inventions, and the financial greasing with credit and monetary balances seem to be fading in the market of this proud, small, and cold continent. I think this is the Dark Side of the Force, and said Force is called ‘Fiscal Stimulation’. When you look at the development of renewables in Europe, they burgeon mostly in those countries, where the fiscal shoulder of the government strongly supports the corresponding investments. Upstream subsidies, and feed-in-tariffs downstream, it is all nice as long as we don’t realize one thing: strong, resilient social structures emerge as the outcome of struggle and fighting, not as the result of gentle patting on the shoulder.

In other words, the European market of renewable energies lacks efficient, market-based solutions, which could channel capital towards new technologies and their applications, and give a hand to fiscal instruments in that respect.  

It looks nice. I mean, I have just developed a coherent, economically well-grounded argument in favour of developing functionalities such as EneFin, and it didn’t hurt as much as I thought it would have.

Now, I change my optic, and I turn towards the financials of the EneFin project itself. I am starting from the break-even point, i.e. from the mutual balance between the gross margin generated on transactions with customers, and the fixed costs of the business. I need to figure out the probable amount of fixed costs. How to estimate fixed costs in a business structure that does not exist yet? The easiest way is business modelling. I take a business as similar as possible to what I want to develop, and I barefacedly copy what they do. The closest cousin to my project, which I can find and emulate, is FinTech Group AG in Germany. In Table 1, below, you can see their selected financials.

 

Table 1 Selected financials of FinTech Group AG

Year Fixed costs in €000 Revenue €000 Share of fixed costs in revenue
2015 41 718 75 024 55,6%
2016 38 916 95 021 41,0%
estimate 2017* 45 020 99 124 45,4%
Average 41 885 89 723 46,7%

*this is an estimate based on mid-year results, i.e. semi-annual figures have been multiplied by two

It is pretty obvious that revenues reaching over €99 million annually will be, in the start-up phase, out of reach for the EneFin project. What counts the most are the proportions. It looks like a FinTech company located in Europe needs some €0,47 of fixed costs for each €1 of revenue, in order to keep its business structure afloat. Still, fixed costs are fixed. I know, it sounds a bit tautological, but it is the essential property of fixed costs. In a given business model, i.e. in a bundle of processes that create and capture value added, we need a certain fixed structure to maintain those processes. Thus, now, I wonder what the minimum size of a business structure in the FinTech business is.

What do I do when I need a piece of information I don’t have? I go to Google, and I type: ‘what is the minimum size of a FinTech business?’. Ridiculous? Maybe, but efficient. My first hit is a fellow WordPress site, labelled ‘Venture Scanner’, and there I find an article entitled Average Company Size Per FinTech Category, and Bob’s my uncle. The EneFin concept matches three categories mentioned there, i.e. Crowdfunding, Institutional Investment, and Small Business Tools, with respective headcounts of 38, 40, and 80 employees.

It is so easy. It should be illegal. I type ‘what is the average salary of an engineer in Europe?’. My first hit is also a blog, www.daxx.com, and there I find a piece entitled ‘IT Salaries: Which Is the Highest-Paying Country for a Software Developer?’. Looks like €60 000 a year is a reasonable average.

Now, I assume that among those typical sizes of organizations in FinTech, I aim for the relatively smaller one, i.e. 40 people. Those 40 people are supposed to earn €60 000 each, on average, and that makes a total payroll of 40 * €60 000 = €2 400 000 a year. Good. Next step: the rent. Once again, Professor Google directs me onto the path of wisdom, to the website called ‘The Balance Small Business’, and there I find a calculator of the work space necessary. Looks like it is some 18,6 m2 per engineer (the original article gives amounts in square feet, but you just multiply them by 0,093). Hence, I need, for my EneFin structure, like 18,6 * 40 = 744 m2 of office space. I check a big business hub, Frankfurt, Germany, for rental prices. Looks like €20 a month per 1 m2 is a reasonable rate to expect for a relatively good location, which makes 744 * €20 * 12 = €178 560 a year.

Thus, the basic payroll plus the rental of office space makes €2 400 000 + €178 560 =  €2 578 560 a year, which I multiply by two in order to account for marketing and other administrative expenses. Now, some of you could ask, isn’t that multiplying by two a bit far-fetched? Well, what I can tell you for sure: at least some of those 40 people, maybe even most of them, will have to travel, and business trips, it costs insane amounts of money. Anyway, my rough guess of fixed costs for the core structure of EneFin is €5 157 120 a year, and this is sort of the first pit-stop on the path of development in that business. Based on what I found earlier, with that FinTech Group AG, I assume that €5 157 120 a year needs €5 157 120 / 0,467 =  €11 043 083,51 a year in terms of revenue.
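The whole estimate can be retraced in a few lines; the inputs are exactly the ones quoted above:

```python
# Retracing the fixed-cost estimate above, step by step.

headcount = 40                       # target size of the EneFin structure
avg_salary = 60_000                  # average annual salary of an engineer, EUR
payroll = headcount * avg_salary     # EUR 2 400 000 a year

space_per_person = 18.6              # m2 per engineer (square feet * 0,093)
office_space = space_per_person * headcount    # 744 m2
rent = office_space * 20 * 12        # EUR 20 per m2 per month -> EUR 178 560 a year

fixed_costs = 2 * (payroll + rent)   # x2 to cover marketing and administration
breakeven_revenue = fixed_costs / 0.467   # fixed-cost share seen at FinTech Group AG
```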

A pit-stop, you reach it after having driven on that racing track for some time, and here, the time is like 4 years. I assume that cycle on the grounds of the case study I did regarding the business model of Square Inc. You can find the details in « The expected amount of what can happen ». My vision of EneFin in terms of products marketed is a 50/50 balance between transaction-based revenues, on the one hand, and those based on subscription, on the other hand. Therefore, I split the target revenue of €11 043 083,51 a year, to be reached in the fourth year, into two halves, or partial targets of €5 521 542 each. In other words, I am sketching a business model, which leads to developing, over 4 years, two business units inside the same business concept. One of those business units would be focused on developing a product based on transaction fees; the other one would target a subscription-based utility.

I am using the model cycles of growth I nailed down, with the help of Euclidean distance, in the analogous, i.e. transaction-based and subscription-based, business fields at Square Inc. I apply it to the target revenue of EneFin, as calculated and structured above. The results are shown in Table 2 below.

 

Table 2 First approach to revenues and operational margin at EneFin

Planned percentage of the target revenue Planned revenue in €
Year Subscription-based Transaction-based Subscription-based Transaction-based Operational profit after fixed costs of €5 157 120
1 10% 37% € 534 033 € 2 033 301 € (2 589 787)
2 47% 83% € 2 571 877 € 4 575 370 € 1 990 127
3 73% 91% € 4 046 709 € 5 048 456 € 3 938 045
4 100% 100% € 5 521 542 € 5 521 542 € 5 885 964

 

Now, the market. I made a practical (I hope!) approach from that angle in « The stubbornly recurrent LCOE ». Provisionally, I estimate the basic transaction fee collected by EneFin at 5%, although the fork of possible rates is really wide, ranging from fractions of a percentage point, practiced in the actual financial business, e.g. the 0,4% collected by brokerage houses on your transaction in the stock market, up to the nearly 20% apparently collected by Square Inc in their transaction-based products.

Subscription-based products seem to be sort of better in the FinTech business, but you need to tempt your customers into paying a fixed subscription fee instead of a casual, transaction-based one. You tempt them crudely and primitively, by making the subscription-based fee more attractive financially if they do a large amount of transactions. The mathematical construct that I adopt to simulate this one is the following: if %T is the transaction-based fee, expressed as a percentage of the actual transactions, the subscription-based fee %S should be like %S = 0,5*%T. If %T = 5%, then %S should modestly stay at %S = 2,5%.

I take the target revenues from the 4th year of the simulated development cycle, as shown above, and I compute the value of energy, at retail market prices, that corresponds to those revenues. It makes, respectively, €220 861 670 of energy in the subscription-based market, and €110 430 835 in the transaction-based one, €331 292 505 in total. Now, I take that lump sum and I apply it to the national markets of selected European countries, at their local retail prices. I calculate the quantity of kilowatt hours that correspond, in each country, to those €331 292 505, and I express it as the percentage of the overall national market of energy for households. Additionally, I calculate the amount of capital that suppliers of energy can raise through the complex contracts of EneFin, where the fork between the retail price for households and that for non-household users is being invested into the balance sheet of the supplier. The results of this particular calculation are shown in Table 3, below.
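For transparency, here is how the figures in Table 3 can be reproduced. The fee rates are the ones assumed above; the prices used in the example call are the rounded ones shown in Table 3, so the results match the table only up to rounding:

```python
# A sketch of the arithmetic behind Table 3.

FEE_TRANSACTION = 0.05          # %T, the transaction-based fee
FEE_SUBSCRIPTION = 0.025        # %S = 0,5 * %T

target_revenue = 11_043_083.51  # 4th-year revenue target, EUR
rev_subscription = rev_transaction = target_revenue / 2

# value of energy, at retail prices, corresponding to those revenues
energy_value = (rev_subscription / FEE_SUBSCRIPTION
                + rev_transaction / FEE_TRANSACTION)   # ~ EUR 331,3 million

def national_footprint(p_household, p_non_household):
    """kWh served at target revenue, and the capital raised by suppliers
    through the fork between household and non-household prices."""
    kwh = energy_value / p_household
    capital_raised = (p_household - p_non_household) * kwh
    return kwh, capital_raised

# e.g. the Austria row of Table 3 (EUR 0,20 vs EUR 0,09 per kWh):
kwh_austria, capital_austria = national_footprint(0.20, 0.09)
```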

 

Table 3

Country Price of electricity for households, per 1 kWh Non-household price of electricity, per 1 kWh Percentage of the national market of households, served by EneFin at target revenue Capital raised by local suppliers via EneFin at target revenue
Austria € 0,20 € 0,09 2,5% € 182 210 877,94
Switzerland € 0,19 € 0,10 3,5% € 152 445 943,44
Czech Republic € 0,14 € 0,07 2,9% € 165 646 252,68
Germany € 0,35 € 0,15 0,2% € 189 310 003,06
Spain € 0,23 € 0,11 0,6% € 172 848 263,66
Estonia € 0,12 € 0,09 25,0% € 82 823 126,34
Finland € 0,16 € 0,07 3,2% € 186 352 034,26
France € 0,17 € 0,10 0,4% € 136 414 561,03
United Kingdom € 0,18 € 0,13 0,5% € 92 025 695,93
Netherlands € 0,16 € 0,08 1,4% € 165 646 252,68
Norway € 0,17 € 0,07 3,2% € 194 877 944,33
Poland € 0,15 € 0,09 1,2% € 132 517 002,14
Portugal € 0,23 € 0,12 3,2% € 158 444 241,69

 

Good. That business plan seems to be taking shape. EneFin seems to need just sort of a beachhead in most national markets of energy, in order to keep its head above the water. Of course, there is a lot of testing and retesting of numbers before I nail them down definitively, but the essential business concept seems to hold.


The stubbornly recurrent LCOE

 

I am thinking about those results I got in my last two research updates, namely in “The expected amount of what can happen”, and in “Contagion étonnement cohérente”. Each time, I found something intriguingly coherent in mathematical terms. In “The expected amount of what can happen”, I have probably nailed down some kind of cycle in business development, some 3 – 4 years, as regards the FinTech industry. In “Contagion étonnement cohérente”, on the other hand, I have seemingly identified a cycle of behavioural change in customers, of around 2 months, which allows interpolating two distinct, predictive models as for the development of a market: the epidemic model based on a geometric-exponential function, and the classical model of absorption based on the normal distribution. That cycle of behavioural change looks like the time lapse to put into an equation, where the number of customers is a function of time elapsed, like n(t) = e^(0,69*t). Why ‘0,69’ in n(t) = e^(0,69*t)? Well, the 0,69 fits nicely, when the exponential function n(t) = e^(β*t) needs to match a geometric process that duplicates the number of customers at every ‘t’ elapsed, like n(t) = 2*n(t-1) + 1.
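The match between that geometric process and the exponential can be verified directly; 0,69 is just a rounded natural logarithm of 2:

```python
# Why 0,69: the doubling process n(t) = 2*n(t-1) + 1 grows like 2^t = e^(t*ln 2),
# and ln 2 is approximately 0,693.
import math

beta = math.log(2)                   # ~ 0,6931

n, geometric = 0, []
for t in range(1, 11):
    n = 2 * n + 1                    # 1, 3, 7, 15, ... i.e. 2^t - 1
    geometric.append(n)

exponential = [math.exp(beta * t) for t in range(1, 11)]   # 2, 4, 8, ...
# each geometric term stays exactly one short of the matching exponential term
```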

I have identified those two cycles of change, thus, and they both look like cycles of behavioural change. It takes a FinTech business like 3+ years to pass from launching a product to stabilizing it, and it apparently takes the customers some 2 months to modify their behaviour significantly – or to take a distinctive, noticeable step in such behavioural change – regarding a new technology. I am trying to wrap my mind around each of those cycles separately, as well as around their mutual connection. It seems important for continuing to write that business plan of mine for the EneFin project, that FinTech concept for the market of energy, where households and small businesses would buy their energy through futures contracts combined with participatory deeds in the balance sheet of the energy provider.

Now, before I go further, a little explanation for those of you who might not quite grasp the way I run this blog. This is a research log in the most literal sense of the term. I write and publish as I think about things and as I channel my energy into the thinking. This blog is the living account of what I do, not a planned presentation. As for the latter category, you can find it under the heading of “Your takeaways / Vos plats à emporter”. That approach, from the side of raw science in the making, is the reason why you can see me coming and going about ideas, and this is why I write in two languages: English and French. I found out that my thinking just goes sort of better when I alternate those two.

Anyway, I am trying to understand what I have discovered, I mean those two intriguing cycles of behavioural change, and I want to incorporate that understanding into the writing of my business plan for the EneFin project. A cycle of change spells process: there is a point in talking about a cycle only if it happens recurrently, with one cycle following a previous one.

So, I do what I need to do, namely I sketch the landscape. I am visualising urban networks composed of wind turbines with a vertical axis, such as I started visualising in « Something to exploit subsequently ». Each network has a different operator, who maintains a certain number of turbines scattered across the city. Let this city be Lisbon, Portugal, one of my favourite places in Europe, which, on top of all its beauty, allows experiencing the shortest interval of time in the universe, i.e. the time elapsing between the traffic lights turning green, for vehicles, and someone from among said vehicles hooting impatiently.

We are in Lisbon, and there are local operators of urban wind turbines, and with the wind speed being 4,47 m/s on average, each turbine, such as described in the patent application no. EP 3 214 303 A1, generates an electric power averaging 47,81 kilowatts. That makes 47,81 kilowatts * 8760 hours in the normal calendar year = 418 815,60 kilowatt hours of energy a year. At €0,23 for each kWh at the basic price for households, in Portugal, the output of one turbine is worth like € 96 327,59. According to the basic scheme of EneFin, those € 96 327,59 further split themselves in two, and make:

€ 50 257,87 in futures contracts on energy, sold to households at the more advantageous rate of €0,12, normally reserved for the big institutional end users
€ 46 069,72 in participatory deeds in the balance sheet of the operator who currently owns the turbine

Thus, each local operator of those specific wind turbines has a basic business unit – one turbine – and the growth of business is measured at the pace of developing such consecutive units. Now, the transactional platform « EneFin » implants itself in this market, as a FinTech utility for managing financial flows between the local operators of those turbines, on the one hand, and the households willing to buy energy from those turbines and invest in their balance sheet, on the other hand. I assume, for the moment, that EneFin takes a 5% commission on the trading of each complex contract. One turbine generates 5% * € 96 327,59 = € 4 816,38 of commission to EneFin.
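Put as a few lines of arithmetic, the per-turbine figures quoted above come out as follows:

```python
# Per-turbine economics in Lisbon, using the figures quoted above.

power_kw = 47.81            # average electric power at 4,47 m/s wind speed
hours = 8760                # hours in a normal calendar year
output_kwh = power_kw * hours                 # 418 815,60 kWh a year

P_HOUSEHOLD = 0.23          # EUR per kWh, Portuguese retail price for households
P_INSTITUTIONAL = 0.12      # EUR per kWh, rate for big institutional users

output_value = output_kwh * P_HOUSEHOLD       # ~ EUR 96 327,59 per turbine
futures_contracts = output_kwh * P_INSTITUTIONAL     # ~ EUR 50 257,87
participatory_deeds = output_value - futures_contracts   # ~ EUR 46 069,72
enefin_commission = 0.05 * output_value       # ~ EUR 4 816,38 per turbine
```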

I am progressively making the above converge with those cycles I have identified. In the first place, I take the two cycles, i.e. the ≈ 2 months of behavioural change in customers, and the ≈ 3+ years of business maturation. On top of that, I take the simulations of absorption, as you can see in « Safely narrow down the apparent chaos ». That means I take into account still another cycle, that of 7 years = 84 months for the absorption of innovation in the market of renewable energies. As I am having a look at the thing, I am going to start the checking with the last one. Thus, I take the percentages of the market, calculated in « Safely narrow down the apparent chaos », and I apply them to the population of Lisbon, Portugal, i.e. 2 943 000 people as of the end of 2017.

The results of this particular step in my calculations are shown in Table 1 below. Before I go interpreting and transforming those numbers, further below the table, a few words of reminder and explanation for those among the readers who might not have quite followed my previous updates on this blog. Variability of the population is a coefficient of proportion, calculated as the standard deviation divided by the mean, said mean being the average time an average customer needs in order to switch to a new technology. This average time, in the calculations I have made so far, is assumed to be 7 years = 84 months. The coefficient of variability reflects the relative heterogeneity of the population. The greater its value, the more differentiated are the observable patterns of behaviour. At v = 0,2, it is like a beach, in summer, on the Mediterranean coast, or like North Korea, i.e. people behaving in very predictable, and very recurrent ways. At v = 2, it is more like a Halloween party: everybody tries to be original.

Table 1

Number of customers acquired in Lisbon
[a] [b] [c] [d]
Variability of the population 12th month 24th month 36th month
0,1 0 0 0
0,2 30 583 6 896
0,3 5 336 25 445 86 087
0,4 29 997 93 632 212 617
0,5 61 627 161 533 310 881
0,6 85 978 206 314 365 497
0,7 100 653 229 546 387 893
0,8 107 866 238 238 390 878
0,9 110 200 238 211 383 217
1 109 574 233 290 370 157
1,1 107 240 225 801 354 689
1,2 103 981 217 113 338 471
1,3 100 272 208 016 322 402
1,4 96 397 198 958 306 948
1,5 92 525 190 184 292 331
1,6 88 753 181 821 278 638
1,7 85 134 173 925 265 878
1,8 81 695 166 513 254 020
1,9 78 446 159 577 243 014
2 75 386 153 098 232 799

Now, I do two things to those numbers. Firstly, I try to make them relative to incidences of epidemic contagion. Mathematically, it means referring to that geometric process, which duplicates the number of customers at every ‘t’ elapsed, like n(t) = 2*n(t-1) + 1, and which is nicely (almost) matched by the exponential function n(t) = e^(0,69*t). So what I do now is to take the natural logarithm of each number in columns [b] – [d] in Table 1, and I divide it by 0,69. This is how I get the ‘t’, or the number of temporal cycles that the exponential function n(t) = e^(0,69*t) needs so as to yield the same number as shown in Table 1. Then, I divide the time frames in the headings of those columns, thus, respectively, 12, 24, and 36, by that number of temporal cycles. As a result, I get the length of one period of epidemic contagion between customers, expressed in months.
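The backward calculation described in this paragraph can be sketched as follows; the example cell is taken from Table 1:

```python
# From a customer count back to the length of one contagion cycle:
# n = e^(0,69*t)  =>  t = ln(n) / 0,69; one cycle lasts (months elapsed) / t.
import math

def contagion_cycle_months(customers, months_elapsed):
    """Length, in months, of one period of epidemic contagion implied
    by reaching 'customers' after 'months_elapsed' months."""
    t = math.log(customers) / 0.69
    return months_elapsed / t

# e.g. 61 627 customers by the 12th month (variability 0,5 in Table 1):
cycle = contagion_cycle_months(61_627, 12)   # ~ 0,75 of a month, as in Table 2
```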

Good, let’s diagnose this epidemic contagion. Herr Doktor Wasniewski (this is me) has pinned down the numbers shown in Table 2 below. Something starts emerging, and I am telling you, I don’t really like it. I have enough emergent things on my hands already, which I have no clue about the meaning of. One more emergent phenomenon is one more pain in my intellectual ass. Anyway, what is emerging is a pattern of decreasing velocity. When I take the numbers from Table 1, obtained with a classical model of absorption based on the normal distribution, those numbers require various paces of epidemic contagion in the behaviour of customers. In the beginning, the contagion needs to be f***ing fast, like 0,7 ÷ 0,8 of a month, so some 21 – 24 days. Only in very homogenous populations, with variability of around v = 0,2, is it a bit longer.

One thing: do not really pay attention to the row with variability of the population equal to 0,1. This is a very homogenous population, and I placed it here mostly for the sake of contrast. The values in brackets in this particular row of Table 2 are negative, which essentially suggests that if I want that few customers, I would need to go back in time.

So, I start with quite a vivacious contagion, something to put in the scenario of an American thriller, like ‘World War Z no. 23’. Subsequently, the velocity of contagion is supposed to slow down, to like 1,3 ÷ 1,4 months in the second year, and almost 2 months in the 3rd year. It correlates surprisingly with that 3+ years cycle of getting some stance in the business, which I have very intuitively identified, using Euclidean distances, in « The expected amount of what can happen ». I understand that as the pace of contagion between clients slows down, my marketing needs to be less and less aggressive, ergo my business gains in gravitas and respectability.

Table 2

The length of one temporal period « t » in the epidemic contagion n(t) = 2*n(t-1) + 1 ≈ e^(0,69*t), in the local market of Lisbon, Portugal
[a] [b] [c] [d]
Variability of the population 12th month 24th month 36th month
0,1  (0,34)  (1,26)  (6,55)
0,2  2,44  2,60  2,81
0,3  0,96  1,63  2,19
0,4  0,80  1,45  2,02
0,5  0,75  1,38  1,96
0,6  0,73  1,35  1,94
0,7  0,72  1,34  1,93
0,8  0,71  1,34  1,93
0,9  0,71  1,34  1,93
1  0,71  1,34  1,94
1,1  0,71  1,34  1,94
1,2  0,72  1,35  1,95
1,3  0,72  1,35  1,96
1,4  0,72  1,36  1,97
1,5  0,72  1,36  1,97
1,6  0,73  1,37  1,98
1,7  0,73  1,37  1,99
1,8  0,73  1,38  2,00
1,9  0,73  1,38  2,00
2  0,74  1,39  2,01

The second thing I do to the numbers in Table 1 is to convert them into money, and more specifically into: a) the amount of the transaction-based fee of 5%, collected by the EneFin platform, and b) the amount of capital collected by the suppliers of energy via the EneFin platform. I start by assuming that my customers are not really single people, but households. The numbers in Table 1, referring to single persons, are being divided by 2,6, which is the average size of one household in Portugal.

In the next step, I convert households into energy. Easy. One person in Portugal consumes, for household use strictly speaking, some 4 288,92 kWh a year. That makes 11 151,20 kWh per household per year. Now, I convert energy into money, which, in financial terms, means €1 338,14 a year in futures contracts on energy, at €0,12 per kWh, and €1 226,63 in terms of capital invested in the supplier of energy via those complex contracts in the EneFin way. The commission taken by EneFin is 5%*(€1 338,14 + €1 226,63) = €128,24. Those are the basic steps that both the operator of urban wind turbines, and the EneFin platform will be taking, in this scenario, as they attract new customers.
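The per-household figures in this paragraph can be retraced directly:

```python
# Per-household money figures, reproduced from the assumptions above.

PERSONS_PER_HOUSEHOLD = 2.6
KWH_PER_PERSON = 4_288.92        # annual household consumption per person, Portugal

kwh_per_household = PERSONS_PER_HOUSEHOLD * KWH_PER_PERSON   # ~ 11 151,20 kWh

futures = kwh_per_household * 0.12            # EUR 1 338,14 in futures contracts
deeds = kwh_per_household * (0.23 - 0.12)     # EUR 1 226,63 invested in the supplier
commission = 0.05 * (futures + deeds)         # EUR 128,24 for EneFin
```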

People converted into money are shown in Tables 3 and 4, below, respectively as the amount of transaction-based fee collected by EneFin, and as the capital collected by the suppliers of energy via those complex contracts traded at EneFin. As I connect the dots, more specifically tables 2 – 4, I can see that time matters. Each year, out of the three, makes a very distinct phase. During the 1st year, I need to work my ass off, in terms of marketing, to acquire customers very quickly. Still, it does not make much difference, in financial terms, which exact variability of population is the context of me working my ass off. On the other hand, in the 3rd year, I can be much more respectable in my marketing, I can afford to go easy on customers, and, at the same time, the variability of the local population starts mattering in financial terms.

Table 3

Transaction-based fee collected by EneFin in Lisbon
Variability of the population 1st year 2nd year 3rd year
0,1 € 0,00 € 0,00 € 1,11
0,2 € 1 458,22 € 28 752,43 € 340 124,01
0,3 € 263 195,64 € 1 255 033,65 € 4 246 097,13
0,4 € 1 479 526,18 € 4 618 201,31 € 10 486 926,46
0,5 € 3 039 639,48 € 7 967 324,44 € 15 333 595,20
0,6 € 4 240 693,13 € 10 176 019,80 € 18 027 422,81
0,7 € 4 964 515,36 € 11 321 936,93 € 19 132 083,67
0,8 € 5 320 300,96 € 11 750 639,54 € 19 279 326,77
0,9 € 5 435 424,51 € 11 749 281,67 € 18 901 432,22
1 € 5 404 510,95 € 11 506 577,11 € 18 257 283,50
1,1 € 5 289 424,10 € 11 137 214,92 € 17 494 337,16
1,2 € 5 128 672,87 € 10 708 687,77 € 16 694 429,35
1,3 € 4 945 700,41 € 10 259 985,98 € 15 901 851,61
1,4 € 4 754 575,54 € 9 813 197,53 € 15 139 607,38
1,5 € 4 563 606,09 € 9 380 437,89 € 14 418 674,83
1,6 € 4 377 570,97 € 8 967 947,88 € 13 743 280,35
1,7 € 4 199 088,86 € 8 578 519,11 € 13 113 914,13
1,8 € 4 029 458,58 € 8 212 936,36 € 12 529 062,43
1,9 € 3 869 177,26 € 7 870 840,04 € 11 986 204,76
2 € 3 718 261,64 € 7 551 243,62 € 11 482 385,83

Table 4

Capital collected by the suppliers of energy via EneFin, in Lisbon
Variability of the population 1st year 2nd year 3rd year
0,1  € 0,00  € 0,00  € 10,63
0,2  € 13 948,06  € 275 020,26  € 3 253 324,36
0,3  € 2 517 495,89  € 12 004 537,77  € 40 614 395,82
0,4  € 14 151 834,00  € 44 173 614,09  € 100 308 629,20
0,5  € 29 074 492,97  € 76 208 352,96  € 146 667 559,88
0,6  € 40 562 705,95  € 97 334 772,00  € 172 434 323,50
0,7  € 47 486 146,88  € 108 295 598,06  € 183 000 528,68
0,8  € 50 889 276,10  € 112 396 186,64  € 184 408 925,42
0,9  € 51 990 445,74  € 112 383 198,48  € 180 794 321,60
1  € 51 694 754,11  € 110 061 702,11  € 174 632 966,74
1,1  € 50 593 935,49  € 106 528 711,32  € 167 335 299,39
1,2  € 49 056 331,91  € 102 429 800,96  € 159 684 091,36
1,3  € 47 306 179,81  € 98 137 917,98  € 152 102 996,27
1,4  € 45 478 048,96  € 93 864 336,33  € 144 812 044,65
1,5  € 43 651 404,71  € 89 724 941,73  € 137 916 243,78
1,6  € 41 871 957,84  € 85 779 428,52  € 131 456 019,80
1,7  € 40 164 756,47  € 82 054 498,57  € 125 436 061,23
1,8  € 38 542 223,78  € 78 557 658,50  € 119 841 889,05
1,9  € 37 009 114,98  € 75 285 468,80  € 114 649 394,41
2  € 35 565 590,09  € 72 228 493,11  € 109 830 309,84

 Now, I do one final check. I take the formula of LCOE, or the levelized cost of energy, as shown in the formula below:

LCOE = [ Σt ( It + Mt + Ft ) ] / [ Σt Et ]

Symbols in the equation have the following meaning: a) It is the capital invested in period t, b) Mt stands for the cost of maintenance in period t, c) Ft symbolizes the cost of fuel in period t, and d) Et is the output of energy in period t. I assume that wind is for free, so my Ft is zero. I further assume that It + Mt make a lump sum of capital, acquired by the supplier of energy, and equal to the amounts of capital calculated in Table 4. Thus I take those amounts from Table 4, and I divide each of them by the energy consumed by the corresponding headcount of households. Now, it becomes really strange: whatever the phase in time, and whatever the variability of behaviour assumed in the local population, the thus-computed LCOE is always equal to €0,11. Always! Can you understand? Well, if you do, you are smarter than me, because I don’t. How can so differentiated an array of numbers, in Tables 1 – 4, yield one and the same cost of energy, those €0,11? Honestly, I don’t know.

Calm down, Herr Doktor Wasniewski. This is probably how those Greeks hit their π. Maybe I am hitting another one. I am trying to take another path. I take the number(s) of people from Table 1, I take their average consumption of energy, as official for Portugal – 4 288,92 kWh a year per person – and, finally, I take the 47,81 kilowatts of capacity in one single wind turbine, as described in patent application no. EP 3 214 303 A1, for Lisbon, with an average wind speed of 4,47 m/s. Yes, you guessed right: I want to calculate the number of such wind turbines needed to supply energy to the given number of people, as shown in Table 1. The numerical result of this particular path of thinking is shown in Table 5 below.
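The turbine count can be sketched as follows. One assumption I make explicit in the code: the turbine is taken to deliver its rated 47,81 kW around the clock, i.e. 47,81 × 8 760 ≈ 418 815,6 kWh a year; the headcount of 100 000 people is a made-up example, not a row of Table 1:

```python
import math

ANNUAL_KWH_PER_PERSON = 4288.92   # official average for Portugal, per the text
TURBINE_CAPACITY_KW = 47.81       # patent application no. EP 3 214 303 A1
HOURS_PER_YEAR = 8760             # assumption: round-the-clock output at rated capacity

def turbines_needed(people):
    """Number of turbines needed to cover the annual demand of a population."""
    demand_kwh = people * ANNUAL_KWH_PER_PERSON
    output_per_turbine_kwh = TURBINE_CAPACITY_KW * HOURS_PER_YEAR  # ~418 815,6 kWh
    return math.ceil(demand_kwh / output_per_turbine_kwh)

print(turbines_needed(100_000))  # hypothetical headcount -> 1025
```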

The Devil never sleeps, as we say in Poland. Bloody right. He has just tempted me to take the capital amounts from Table 4 (above) and divide them by the number of turbines from Table 5. Guess what. Another constant. Whatever the exact variability in behaviour, and whatever the year, it is always €46 069,64. I can’t help it, I continue. I take that constant €46 069,64 of capital invested per turbine, and I divide it by the constant LCOE of €0,11 per kWh, which yields some 418 815 kWh per turbine; at 4 288,92 kWh per person and 2,6 persons per household, that makes 37,56 households per turbine, just to make it sort of smooth in numbers.
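That last chain of divisions, step by step, with all the constants as quoted above (the tiny discrepancy in the kWh figure comes from the constants being quoted to two decimals):

```python
capital_per_turbine = 46_069.64   # € of capital invested per turbine
lcoe = 0.11                       # € per kWh
kwh_per_person = 4288.92          # average yearly consumption in Portugal
persons_per_household = 2.6

kwh_per_turbine = capital_per_turbine / lcoe          # ~418 815 kWh a year
households_per_turbine = kwh_per_turbine / kwh_per_person / persons_per_household

print(round(kwh_per_turbine, 2), round(households_per_turbine, 2))  # -> 418814.91 37.56
```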

Table 5

Number of wind turbines needed for the number of customers as in Table 1
Variability of the population 1st year 2nd year 3rd year
0,1 0 0 0
0,2 0 6 71
0,3 55 261 882
0,4 307 959 2 177
0,5 631 1 654 3 184
0,6 880 2 113 3 743
0,7 1 031 2 351 3 972
0,8 1 105 2 440 4 003
0,9 1 129 2 439 3 924
1 1 122 2 389 3 791
1,1 1 098 2 312 3 632
1,2 1 065 2 223 3 466
1,3 1 027 2 130 3 302
1,4 987 2 037 3 143
1,5 948 1 948 2 994
1,6 909 1 862 2 853
1,7 872 1 781 2 723
1,8 837 1 705 2 601
1,9 803 1 634 2 489
2 772 1 568 2 384

Another thing to wrap my mind around. My brain needs some rest. Enough science for today. I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for suggesting to me two things that Patreon suggests I suggest to you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

 


At the frontier, with my numbers

And so I am working on two business concepts in parallel. One of them is EneFin, my own idea of a FinTech utility in the market of energy, with a special focus on promoting the development of new, local providers in renewable energies. The other is MedUs, a concept I am developing together with a former student of mine, and this one consists in creating an online platform for managing healthcare services, as well as patients’ medical records, in the out-of-pocket market of medical services.

The basic concept of EneFin is to combine trade in futures contracts on the retail supply of electricity with trade in participatory deeds in the providers of said electricity. My sort of idée fixe is to create a FinTech utility that allows, in turn, creating local networks of energy production and distribution as cooperative structures, where the end-users of energy are, at the same time, shareholders in the local power installations. I want to use FinTech tools in order to combine all the advantages of a cooperative structure (low barriers to entry for new projects and investors, low prices of energy) with those of a typically capitalist one (high liquidity and adaptability).

After a cursory review of the available options in terms of legal and financial schemes (see Traps and loopholes, as well as Les séquences, ça me pousse à poser cette sorte des questions), I came up with two provisional conclusions. Firstly, a cryptocurrency internal to EneFin looks like the best way of organising smooth trade in both the futures contracts on energy and the participatory shares in the energy providers. Secondly, the whole business has better chances to survive and thrive if the essential concept of EneFin is offered to users as a set of specific options in an otherwise much broader trading platform.

EneFin as a business in itself can make profits on trading fees strictly spoken, like a percentage on every transaction. Still, if the underlying technological platform develops really well, EneFin could grow an engineering branch, supplying that technology itself to other organizations. This is an option to take into account in any business with ‘tech’ in its description.

MedUs, on the other hand, is based on the idea that strictly spoken medical services, I mean the out-of-pocket paid ones, tend to be quite chaotic, at least in the context of European markets. In Europe, most healthcare is financed via public pooled funds, accompanied by private pooled funds (or via network structures that operate de facto as pooled funds). Out-of-pocket paid healthcare is frequently an emergency or a luxury, usually not the bulk of the medical care we use. Medical records generated in out-of-pocket healthcare are technically there (each doctor has to create a file for a patient, even for one visit), and yet they have sort of a nebular structure: it is a bloody hell of a nightmare to recreate your personal medical history out of them.

The basic concept of MedUs consists in using Blockchain technology in order to create a dynamic ledger of medical records. Blockchain acts as an archive in itself, very resilient to unlawful modifications. If my otherwise a bit accidental, dispersed medical visits, paid for in the out-of-pocket system, are arranged and paid for via a Blockchain-based platform, it is possible to attach a ledger of medical records to the strictly spoken ledger of transactions. I say ‘possible’ because in that nascent business we still don’t have a clear idea of technological feasibility: Blockchain is cool in simple semantic structures, like cryptocurrencies, but becomes really consuming, in terms of energy and disk space, if we want to handle large, complex sets of data.

MedUs, as we see it now, is supposed to earn money in three essential ways: a) through trading visit-coupons for private healthcare (i.e. coupons that serve to pay for medical care), in the form of coupons strictly spoken or of a cryptocurrency, b) through running a closed platform accessible to medical providers after they pay for the initial software package and a monthly participatory fee, and c) as a provider of the technology for creating local structures as in (a) and (b). I can also see a possible carryover from the EneFin concept to MedUs: new, local providers of healthcare could sell their participatory shares to patients together with those visit-coupons, and thus create cooperative structures in local markets.

In this update I am focusing on one specific issue regarding both concepts, namely on basic, quantitative market research, which I understand as the study of prices and quantities. My point is that you have two fundamental strategies for developing a new business. Your business can grow as your market grows, for one. That’s the classical approach, to be found, for example, in Adam Smith. Still, there are businesses which flourish in slowly dying markets. The market of oil is a good example: there are no prospects for big growth, this is certain, and yet there are companies that still make profits in oil.

In a few past updates, I took something like a cursory set of 13 European countries and I calculated their various quantitative attributes regarding EneFin and the European market of energy. These countries are: Austria, Switzerland, Czech Republic, Germany, Spain, Estonia, Finland, France, United Kingdom, Netherlands, Norway, Poland, Portugal. I am going to keep my focus on this set of countries and run a comparative market research, in terms of basic prices and quantities, for both concepts (i.e. EneFin and MedUs) together.

Now, I will try to move forward along that narrow crest that separates educational content from strictly spoken market research for business purposes. I want this blog to be educational, so I am going to give some methodological explanations as I run my quantitative analysis, and yet, at the same time, I want material, analytical progress for both business plans. Thus, here we go.

Both concepts address a similar relation between suppliers and their customers. Households are the target customers in both cases. As for EneFin, the category of ‘households’ is a bit more flexible: it can encompass small businesses, small local NGOs, and farms as well. Still, in both of those business concepts population is the most fundamental metric for measuring quantities. I usually reach for the demographics published by the World Bank: this source is quick to dig info out of (I mean the interface is handy), and, as far as I know, it is reliable. I am a big fan of using demographics in market research, by the way: they can tell us much more than it superficially appears.

Demographic data from the World Bank covers the window from 1960 through 2016. Quantitative market research is about dynamics in time, as well as about cross-sectional differences. Here below, in Table 1, there is a bit of demographic info about my 13 countries:

Table 1 – Demographic analysis

Country Population headcount in 2016 Demographic growth, 1960 – 2016
Austria 8 747 358 24,1%
Switzerland 8 372 098 57,1%
Czech Republic 10 561 633 10,0%
Germany 82 667 685 13,5%
Spain 46 443 959 52,5%
Estonia 1 316 481 8,7%
Finland 5 495 096 24,1%
France 66 896 109 42,9%
United Kingdom 65 637 239 25,3%
Netherlands 17 018 408 48,2%
Norway 5 232 929 46,1%
Poland 37 948 016 28,0%
Portugal 10 324 611 16,6%
Total 366 661 622 29,3%

Good, now what do those demographics tell us? I am interested in growth rates in the first place. Anyone who knows at least a little about the demographics of Europe can intuitively grasp the difference between, let’s say, the headcount of Switzerland as compared to that of Germany. On the other hand, growth rates are less intuitive. I start from the bottom line, i.e. from that compound rate of demographic growth in all the 13 countries taken together. It is 29,3% from 1960 through 2016, which makes a Compound Annual Growth Rate (CAGR) of (1 + 29,3%)^(1/56) – 1 ≈ 0,46% a year. Nothing to write home about, really. The whole sample of 13 countries makes quite a placid demographic environment. Yet, the overall placidity is subject to strong cross-sectional disparities. Some countries, like Switzerland or Spain, display strong demographic growth, whilst others are really placid in that respect, e.g. Germany.
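For the record, here is the strict compound-rate computation. Note that the quick-and-dirty shortcut of dividing total growth by the number of years gives a slightly higher figure (about 0,52%), which is why the two numbers differ:

```python
total_growth = 0.293      # 29,3% of demographic growth over 1960 - 2016
years = 2016 - 1960       # 56 years of growth

# True compound annual growth rate: the yearly rate which, compounded
# over 56 years, reproduces the total growth of 29,3%.
cagr = (1 + total_growth) ** (1 / years) - 1
print(f"{cagr:.2%}")  # -> 0.46%
```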

How does it matter? Good question. If each consecutive generation has a bigger headcount than the preceding one, in each such consecutive generation new social roles are likely to form. The faster the headcount grows, the more pronounced that aspect of social change is. On the other hand, we are talking about populations that grow (or not really) in constant territories. More people in a constant space means greater density of population, which, in turn, means more social interactions and more learning in one unit of time. Summing up, the rate of demographic growth is one of those (rare) quantitative indicators that reflect true structural change.

Now, we can go a bit wild in our thinking and do something I call ‘social physics’. An elephant running at 10 km per hour represents greater kinetic energy than a dog running at the same speed. Size matters, and speed matters. The size of the population, combined with its growth rate, makes something like a social force. Below, I am presenting a graph which, I hope, expresses this line of thinking. In that graph, you can see a structure where a core of 5 countries (Austria, Finland, Estonia, Czech Republic, and Portugal) sort of huddles against the origin of the manifold, whilst another set of countries maxes out along some kind of frontier, enveloping the edges of the distribution. These max-outs are France and Spain, in the first place, followed by Switzerland and Netherlands on the side of growth, as well as by Germany and UK on the side of numerical size.

Some social phenomena behave like that, i.e. like a subset of frontier cases, clearly differentiating themselves from the subset of core cases. Usually, the best business is to be made at the frontier. Mind you, the entities of such a frontier analysis do not need to be countries: they can be products, business concepts, regions, segments of customers. Whatever differs by absolute size and its rate of change can be observed like that.

[Graph Demogr13_1: population headcount vs. demographic growth rate, 13 countries]

My little demographic analysis shows me that whichever of the two projects I think about – EneFin or MedUs – sheer demographics make some countries (the frontier cases) in my set of 13 clearly better markets than others. After demographics, I turn towards metrics pertinent to energy in general, renewable energies, and to the out-of-pocket market in healthcare. I am going to apply consistently that frontier-of-size-versus-growth-rate approach you could see at work in the case of demographic data. Let’s see where it leads me.

As for energy, I start with a classic, namely the final consumption of energy per capita, as published by the World Bank. This metric is given in kg of oil equivalent per person per year. You want to convert it into kilowatt hours, like in electricity? Just multiply it by 11,63. Anyway, I take a pinch of that metric, just enough for those 13 countries, and I multiply it by another one, i.e. by the percentage share of renewable energies in that final consumption, also from the website of the World Bank. I stir both of these with the already measured population, and I have like: final consumption of energy per capita * share of renewable energies * population headcount = total final consumption of renewable energies [tons of oil equivalent per year].
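That multiplication chain can be sketched in a few lines. The per-capita consumption, the renewable share and the population below are hypothetical placeholders, not actual World Bank figures for any of my 13 countries:

```python
KWH_PER_KGOE = 11.63  # kg of oil equivalent -> kilowatt hours

def renewables_final_consumption(kgoe_per_capita, renewable_share, population):
    """Returns (tons of oil equivalent per year, kWh per year)."""
    total_kgoe = kgoe_per_capita * renewable_share * population
    return total_kgoe / 1000, total_kgoe * KWH_PER_KGOE

# Hypothetical country: 2 000 kgoe per person, 25% renewables, 10 million people.
toe, kwh = renewables_final_consumption(2000, 0.25, 10_000_000)
print(f"{toe:,.0f} toe, {kwh:,.0f} kWh")  # -> 5,000,000 toe, 58,150,000,000 kWh
```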

Table 2, below, summarizes the results of that little arithmetical rummaging. Is there another frontier? Hell, yes. Germany and United Kingdom are the clear frontier cases. Looks like whatever anyone would like to do with renewable energies, in that set of 13 countries, Germany and UK are THE markets to go to.

Table 2 – National markets of renewable energies

Country Final consumption of renewable energies in 2015, tons of oil equivalent Final consumption of renewable energies, compound growth rate 1990 – 2015
Austria 11 296 981,38 80,7%
Switzerland 6 200 709,18 48,6%
Czech Republic 6 036 384,16 241,0%
Germany 44 301 158,29 501,2%
Spain 19 412 734,75 104,4%
Estonia 1 508 374,57 359,5%
Finland 14 036 145,55 101,8%
France 33 167 337,48 42,3%
United Kingdom 15 682 329,72 1069,6%
Netherlands 4 223 183,03 434,9%
Norway 17 433 243,73 39,8%
Poland 11 267 553,99 336,8%
Portugal 5 996 364,89 32,6%

 Good, time to turn my focus to the other project: MedUs. I take a metric available with the World Health Organization, namely ‘Out-of-Pocket Expenditure (OOPS) per Capita in PPP Int$ constant 2010’. Before I introduce the data, a bit of my beloved lecturing about what it means. So, ‘PPP’ stands for purchasing power parity. You take a standard basket of goods that most people buy, in the amounts they buy per year, and you measure the value of that basket, in the local currency of each country, at local prices. You take the coefficient of national income per capita in the given country, and you divide it by the monetary value of that basket. It tells you how many such baskets your average caput (the Latin singular of the plural ‘capita’) can purchase with an average chunk of national income. That ratio, or purchasing power, makes two ‘Ps’ out of the three. Now, you take the PP of the United States as PP = 1,00 and you measure the PP of every other country against the US one. This is how you get the parity of PPs, or PPP.

PPP is handy for converting monetary aggregates from different countries into a common denominator made of US dollars. When we compare national markets, PPP dollars are better than those calculated with the exchange rates, as the former very largely get rid of local inflation, as well as local idiosyncrasies in pricing. With those international dollars being constant for 2010, inflation is basically kicked out of the model. The final point is that measuring national markets in PPP dollars is almost like measuring quantities, sort of standard units of medical services in this case.

So, I take the OOPS and I multiply it by the headcount of the national population, and I get the aggregate OOPS, for all the national capita taken together, in millions of PPP dollars, constant 2010. You can see the results in Table 3, below, once again approached in terms of the latest size on record (2015 in this case) vs. the compound growth rate (2000 – 2015 for this specific metric, as it is available with the WHO). Once again, is there a frontier? Yes, it is made of United Kingdom, Germany and Spain, followed by Netherlands, Switzerland and Poland. The others are the core.

Question: how can I identify a frontier without making a graph? Answer: you can once again refer to that concept of social physics. You take the size of the market in each country, i.e. its aggregate OOPS. You compute the share of this national OOPS in the total OOPS of all the 13 countries taken together. This is the relative weight of that country in the sample. Next, you multiply the compound growth rate of the national OOPS by its relative weight, and you get the metric in the third numerical column, namely ‘Size-weighted growth rate’. The greater the value you obtain in that one, the further from the centre of the manifold, along the two variables combined, you would find the given country.
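The computation behind that third column can be reproduced directly from the first two columns of Table 3 below; the German and British rows serve as a check:

```python
# Aggregate OOPS in millions of PPP$ (2015) and compound growth 2000 - 2015,
# copied from Table 3 (growth given as a multiplier, e.g. 104,9% -> 1.049).
oops = {
    "Austria": (7_951, 1.058), "Switzerland": (17_802, 1.249),
    "Czech Republic": (3_862, 3.002), "Germany": (54_822, 1.049),
    "Spain": (35_816, 1.469), "Estonia": (565, 3.086),
    "Finland": (4_356, 0.985), "France": (20_569, 0.847),
    "United Kingdom": (39_935, 2.755), "Netherlands": (11_027, 2.277),
    "Norway": (4_607, 1.003), "Poland": (15_049, 1.241),
    "Portugal": (7_622, 0.868),
}

total = sum(size for size, _ in oops.values())

def size_weighted_growth(country):
    """Relative weight of the country in the sample, times its growth rate."""
    size, growth = oops[country]
    return (size / total) * growth

print(f"{size_weighted_growth('Germany'):.1%}")         # -> 25.7%
print(f"{size_weighted_growth('United Kingdom'):.1%}")  # -> 49.1%
```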

Table 3 – Aggregate Out-Of-Pocket Expenditure on Healthcare

Country Aggregate OOPS in millions of PPP dollars in 2015 Compound growth rate in the aggregate OOPS, 2000 – 2015 Size-weighted growth rate
Austria 7 951 105,8% 3,8%
Switzerland 17 802 124,9% 9,9%
Czech Republic 3 862 300,2% 5,2%
Germany 54 822 104,9% 25,7%
Spain 35 816 146,9% 23,5%
Estonia 565 308,6% 0,8%
Finland 4 356 98,5% 1,9%
France 20 569 84,7% 7,8%
United Kingdom 39 935 275,5% 49,1%
Netherlands 11 027 227,7% 11,2%
Norway 4 607 100,3% 2,1%
Poland 15 049 124,1% 8,3%
Portugal 7 622 86,8% 3,0%

 Time to wrap up the writing and serious thinking for today. You have had an example of quantitative market analysis, in the form of the ‘frontier vs. core’ method. When we talk about the relative attractiveness of different markets, that method, i.e. looking for frontier markets, is quite logical and straightforward.


Crossbreeds, once they survive the crossbreeding process

 

 As a bit of a surprise, I presently have two business plans on the board, instead of just one. A former student of mine asked me to mentor a business project he is starting up with his friend. The basic concept is that of an online platform for managing medical visits, and the innovation consists in using the Blockchain technology to create, for each patient using that functionality, a digital, trusted ledger of all their medical documentation (medical visits, diagnoses, treatments received etc.), all in one set of data, properly secured and available from any place on Earth.

Additionally, an educational project – a book on the FinTech industry accompanied by an educational toolkit – which I am running with a friend of mine, has gained in maturity, and we will be giving it a definitive form. All in all, ideas and projects abound, and I have decided to use my blog for conveying as accurate an account as possible of my intellectual journey into all three of these realms. From now on, I am doing my best to weave an interesting story of scientific research out of three distinct stories, namely: a) my EneFin project, b) that medical ledger project, which I provisionally name MedUs, and c) the FinTech educational package.

As for the EneFin project, in my last update in French, namely in Les séquences, ça me pousse à poser cette sorte des questions, I came to the conclusion that the best way of starting with the EneFin concept is to create, or to join, an existing generalist trading platform, possibly using a cryptocurrency, such as Katipult, and to include in its general features some options which, in turn, are likely to spur the emergence of new suppliers in renewable energies.

A little pill of an update for those who didn’t follow that update in French: I used a technique that data scientists frequently use, which consists in expressing something we want as a sequence of events, actions and decisions. When I did this with the general concept, to be found in Traps and loopholes, I discovered that at least some potential users of the EneFin functionality are likely to have, and want, a bit more choice and freedom of movement in their financial decisions. I came to the (provisional) conclusion that the strictly spoken EneFin scheme, i.e. promoting the development of new suppliers in renewable energies, will sell better when expressed as a set of financial incentives placed in the environment of an otherwise general, well-running platform of exchange, rather than as a closed system.

Right now, I am working through the issue of contracts and the legal rules that accompany them. I am deconstructing the typical contracts signed for the supply of energy, in order to have a very precise idea of what the smart, crypto-coined contracts at EneFin should look like. Contracts are about securing a precise pattern of behaviour on the part of the other contracting party. I want to understand thoroughly the patterns of behaviour, the wanted as well as the unwanted ones, in the relation between a supplier of energy and their customers.

As for the business plan I am preparing for the MedUs concept, I am at the phase of evaluating the size and the value of the market, together with defining, progressively, the core business process. Let me present a bit of the initial idea, and a few openings that it creates. The idea has its roots in the observation of the Polish healthcare market, which is a maze of mutually interweaving public funding and private schemes. An average Polish patient can seldom rely exclusively on the public provision of medical care. Frequent blood diagnostics, dental care, post-surgery rehabilitation – sooner or later, you just need to stop waiting for the public funding of these, and pay privately, either in the out-of-pocket formula, or in some kind of pooled funding scheme.

Those entangled, disparate funding patterns result in the dissipation of the patients’ medical records. The initial idea of MedUs is to take the already known functionality of online arrangement of medical appointments, and to combine it with the aggregation and proper handling of digitalized medical records. You make an appointment with one doctor via MedUs, you get diagnosed and treated, then you make an appointment with another doctor, another diagnosis and treatment ensue, and the record of all that is stored with MedUs.

This is where the Blockchain technology becomes interesting. Blockchain is basically a ledger, and in handling medical records we need, precisely, a ledger. Medical records contain legally sensitive data, and improper handling can lead to a lot of legal trouble. Every single action taken regarding that data has to be properly documented, and secured against fraud. The basic digital architecture of medical records is that of a database, with the identity of the patient as the leading variable.
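The tamper-evidence I am after can be illustrated with a minimal hash-chained ledger. This is a bare-bones sketch of the general idea, not the actual MedUs design, and the record fields are invented for the example:

```python
import hashlib
import json

def add_record(ledger, record):
    """Append a record; each entry stores the hash of the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    entry = dict(body)
    entry["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

def is_intact(ledger):
    """Recompute the chain; any retroactive edit breaks it."""
    prev_hash = "0" * 64
    for entry in ledger:
        body = {"record": entry["record"], "prev_hash": prev_hash}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["hash"] != expected or entry["prev_hash"] != prev_hash:
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
add_record(ledger, {"patient": "X", "event": "visit", "diagnosis": "food poisoning"})
add_record(ledger, {"patient": "X", "event": "follow-up"})
print(is_intact(ledger))                      # -> True
ledger[0]["record"]["diagnosis"] = "cardiac"  # a retroactive 'correction'
print(is_intact(ledger))                      # -> False
```

This is exactly why a Blockchain-style ledger suits medical records: the history can grow, but it cannot be quietly rewritten.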

In those databases, well, s**t happens, let’s face it. I had a good example of that in my own recent experience. As some of you could have read in ‘The dashing drip of Ketonal, or my fundamental questions for the New Year’, due to a complicated chain of events, involving me, some herrings, and the New Year’s party, I spent the New Year’s night in the emergency ward of the district hospital, with the symptoms of acute food poisoning. As I was being released, on New Year’s Day, I got my official discharge documents. In those documents, space and time warped a little. It started with my data, and then I could read that I had been admitted three days earlier, in Berlin, with acute cardiac symptoms, and subsequently transferred to the very same hospital; then, all of a sudden, my own (real) description followed.

As for me, I wouldn’t care, but my wife said: ‘Look, if you have any complications, or if you need any follow-up in treatment, that official discharge will matter. Go to that hospital and make them get your records straight’. So I did, and you would really like to have seen the faces of the people in the hospital’s administration when I showed them what I was coming with, and what for. It was that specific ‘Oh, f**k, not again!’ look. They got it straight, and so I stopped being that cardiac patient hospitalized in Berlin, but as far as I know, it all required a little bit of IT acrobatics.

As I described the situation to a friend of mine, an IT engineer, he explained to me that this sort of thing happens all the time. Our sensitive data is stored in a lot of databases, and errors happen recurrently. Technically, once they happen, they are supposed to stay on record. Still, what do we have those IT engineers for? What you do, in such a case, is either to run ‘a minor reloading of the database, just to remove some holes in the security systems’, or to deliberately put the system into failure, and reboot it. Both manoeuvres allow the miraculous disappearance of embarrassing data. A lot of institutions do it, like hospitals, banks, even ministries, apparently on a recurrent basis. This is, for example, the way banks hush up the traces of hacking attacks on their customers’ accounts.

Databases with medical records are basically proprietary, i.e. each database has to have a legal entity clearly owning it and being responsible for it. That’s the law. If I use the services of many different medical providers, each of them runs their own database of medical records, and each such database is proprietary, which, in turn, means that my personal medical data is owned by many entities at the same time. Each of these entities holds one piece of the puzzle, and the law basically prohibits any sharing between them, unless a chain of official requests for information is put in motion. As strange as it seems, such a request cannot be issued by the patient whose medical records are in question. Only doctors can put my dispersed medical records into one whole, and I have no leverage over that process.

Strange? Absurd? Well, yes, still no more than the promises some politicians make during elections. Anyway, that student of mine came up with the idea of using Blockchain to revolutionize the system. There is that digital platform, MedUs, which starts innocently, as a simple device to make appointments for private medical care. Now, revolution begins: each action taken by the patient, and about the patient, via MedUs, is considered a transaction, to be stored in a ledger powered by the Blockchain technology. The system allows the patient to be effectively in charge of their own medical record, pertaining to all the medical visits, tests, diagnoses and treatments arranged via MedUs.

A sequence comes to my mind. A patient joins the MedUs platform and buys a certain number of tokens of its internal cryptocurrency. Let’s call them ‘Celz’. Each Celz can buy medical services from providers who have joined MedUs. As it is a cryptocurrency token, each Celz is followed closely in all its visits and acquaintances: the medical history of the patient is written in the hash codes of the Celzes he or she is using on the MedUs platform.

Crossbreeds, once they survive the crossbreeding process strictly spoken, are the strongest, the meanest, and the toughest players in the game of existence, and so I am crossbreeding my business concepts. The genes (memes?) of EneFin gently make their way inside MedUs, and the latter sends small parcels of its intellectual substance into EneFin. Yes, I know, the process of crossbreeding could be a shade more fun, but I am running a respectable scientific blog here. Anyway, strange, cross-bred ideas are burgeoning in my mind. Each subscriber of the EneFin platform could have the whole history of their transactions written into the hash codes of the cryptocurrency used there, and thus the EneFin utility could become something like a CRM (Customer Relationship Management) system, where each token held is informative about the past transactions it changed hands in. How would the reading of such data out of the hash code work in the (legal) light of the General Data Protection Regulation (GDPR)?

On the other hand, why couldn’t patients who join the MedUs platform use their Celzes to buy participation in the balance sheet of those medical providers who wish for such a complex deal? Celzes used to buy equity in medical providers could generate extra purchasing power – more Celzes – to pay for medical services.

In both projects which I am currently preparing business plans for, namely in EneFin and in MedUs, the Blockchain technology comes as a simplifying solution for transforming complex sets of functionally interconnected transactions into a smooth flow of financial deeds. When I find a common denominator, I tend to look for common patterns. I am asking myself what these two ideas have in common. What jumps to my eye is that both pertain to that special zone of social interactions where an otherwise infrastructural sector of the social system gently turns into something more incidental and mercantile. It is about giving some spin to those portions of the essential energy and healthcare systems which can tolerate, or even welcome, some movement and liquidity, without compromising social stability.

As I see that similarity, my mind wanders towards the third project I am working on: the book about FinTech. One of the essential questions I have been turning and returning in my head spells: ‘What is FinTech, at the bottom line? What part of FinTech is just digital technology, versus financial innovation in general?’. Those fundamental questions popped into my head some time ago, after some apparently unconnected readings: Fernand Braudel’s masterpiece ‘Civilisation and Capitalism’, ‘The Expression of The Emotions in Man and Animals’ by Charles Darwin, and finally ‘Traité de la circulation et du crédit’ by Isaac de Pinto. It all pushed me towards perceiving financial deeds, and especially money, as some kind of hormones, i.e. systemic conveyors of information about what is currently the best opportunity to jump on.

A hormone is information in solid form, basically, just obtrusive enough to provoke into action, and light enough to be conveyed a long way from the gland it originates from. OK, here I come: gently and quietly, I have drifted towards thinking about the nature and origins of money. Apparently, you cannot be a serious social thinker if you don’t think about it. Mind you, if you just think about the local (i.e. your own) lack of money, you are but a loser. It is only when you ascend beyond your own, personal balance sheet that you become a respectable economist. Karmic economics, sort of.

Being a respectable social thinker does not preclude practical thinking, I hope, and so I am drifting back to business planning, and to the MedUs concept. My idea is that whatever the final span of customers of that online platform turns out to be, it is going to start in the market of private healthcare or, as I think about it, peri-healthcare as well (beauty clinics, spa centres, detox facilities etc.). Whatever exact transactional concept is finally developed, any payment made by the customers of MedUs will be one of these: a) a margin, paid by the patient over the price, strictly speaking, of the healthcare purchased b) a margin, paid by the provider of healthcare out of the price they receive from the patient, or, finally, c) a capital expense of the healthcare provider, to be reflected in some assets in their balance sheet. Hence, I need to evaluate the aggregate value of payments made by patients, the distribution of the corresponding expenditure per capita, and the capital investments in the sector. Studying a few cases of healthcare businesses, just to get the hang of their strategies, would do no harm either.

As I browsed through the website of the World Health Organization, I selected 17 indicators which seem relevant to studying the market for MedUs. I list them in Table 1, below. They are given either as straight aggregates (indicators #11 – 17), as per capita coefficients, or as shares in the GDP. When something is per capita, I need to find out about the number of capita, for example with the World Bank, and from then on, it is easy: I multiply that thing per capita by the amount of capita in the given country, and I fall on the aggregate. When, on the other hand, I have data in percentages of the GDP, I need the GDP in absolute numbers, and the World Economic Outlook database, by the International Monetary Fund, comes in handy in such instances. Once again, simple multiplication follows: % of GDP times GDP equals aggregate.
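The two multiplications described above can be captured in a couple of one-liners. The figures in the usage lines are purely illustrative round numbers, not actual WHO, World Bank, or IMF data:

```python
def aggregate_from_per_capita(value_per_capita_usd: float, population: int) -> float:
    """Aggregate = per-capita value x population."""
    return value_per_capita_usd * population

def aggregate_from_gdp_share(share_of_gdp_pct: float, gdp_usd: float) -> float:
    """Aggregate = (% of GDP / 100) x GDP."""
    return share_of_gdp_pct / 100.0 * gdp_usd

# Illustrative only: CHE per capita of $900 in a country of 38 million people,
# and capital health expenditure at 0.5% of a $500 billion GDP.
che_aggregate = aggregate_from_per_capita(900.0, 38_000_000)
hk_aggregate = aggregate_from_gdp_share(0.5, 500e9)
```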

Table 1 – Selected indicators about national healthcare systems, as provided by the World Health Organization

Indicator #1 Current Health Expenditure (CHE) as % of Gross Domestic Product (GDP)
Indicator #2 Health Capital Expenditure (HK) as % of Gross Domestic Product (GDP)
Indicator #3 Current Health Expenditure (CHE) per Capita in US$
Indicator #4 Domestic Private Health Expenditure (PVT-D) as % Current Health Expenditure (CHE)
Indicator #5 Domestic Private Health Expenditure (PVT-D) per Capita in US$
Indicator #6 Voluntary Financing Arrangements (VFA) as % of Current Health Expenditure (CHE)
Indicator #7 Voluntary Health Insurance (VHI) as % of Current Health Expenditure (CHE)
Indicator #8 Out-of-pocket (OOPS) as % of Current Health Expenditure (CHE)
Indicator #9 Voluntary Financing Arrangements (VFA) per Capita in US$
Indicator #10 Out-of-Pocket Expenditure (OOPS) per Capita in US$
Indicator #11 Voluntary prepayment, in million current US$
Indicator #12 Other domestic revenues n.e.c., in million current US$
Indicator #13 Voluntary health insurance schemes, in million current US$
Indicator #14 NPISH financing schemes (including development agencies), in million current US$
Indicator #15 Enterprise financing schemes, in million current US$
Indicator #16 Household out-of-pocket payment, in million current US$
Indicator #17 Capital health expenditure, in million current US$

I am wrapping up writing for today. I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your suggestions on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Traps and loopholes

 

My editorial via You Tube

I am focusing on one particular aspect of my EneFin concept, namely on what exactly the consumers of electricity will acquire under the label of ‘participatory deeds in the supplier of energy’. For those who have not followed my blog so far, or just haven’t followed along this particular path of my research, I am summing the thing up. In practically all the European countries I have studied, the retail sales of energy, i.e. to its final users, take place at two very different prices. There is the retail price for households, PH, much higher than the retail price PI practiced with big institutional consumers. The basic EneFin concept aims at making energy accessible to households at a price as low as, or close to, the PI level and, at the same time, at promoting small, local suppliers of renewable energy. The basic concept is that of complex contracts, which combine a futures contract on the supplies of electricity with the acquisition of participatory deeds in the supplier of that electricity. For a given small user who consumes QH kilowatt hours, we have QH(t+z)*PH = QH(t+z)*PI + K(t), and K(t) = QH(t+z)*(PH – PI), where ‘t’ is the present moment in time, ‘t+z’ is a moment in the future, distant from the present by ‘z’ periods, and K(t) is the investment capital supplied today, to the provider of electricity, by means of this complex contract.
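A quick numerical check of that formula helps see the orders of magnitude involved. The prices below are illustrative round numbers of my own choosing, not parameters fixed anywhere in the project:

```python
def enefin_capital(q_h_kwh: float, p_h: float, p_i: float) -> float:
    """K(t) = QH(t+z) * (PH - PI): the capital a household supplies today
    to its energy provider when it buys QH kilowatt hours forward at the
    household price PH, while the energy itself is billed at the lower
    institutional price PI."""
    return q_h_kwh * (p_h - p_i)

# Illustrative: a household buying 2500 kWh forward,
# at PH = 0.25 and PI = 0.15 (currency units per kWh).
k = enefin_capital(q_h_kwh=2500.0, p_h=0.25, p_i=0.15)  # roughly 2500 * 0.10

# Sanity check of the identity QH*PH = QH*PI + K(t):
total_paid = 2500.0 * 0.25
energy_part = 2500.0 * 0.15
```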

EneFin Concept

Now, the issue of those participatory deeds purchased together with the futures contracts on electricity. I am advancing step by step, just to keep an eye on details. So, I need something freely tradable, endowed with high liquidity. EneFin is supposed to be a FinTech business, and FinTech means finance, and finance means giving liquidity, i.e. movement, to the otherwise lazy and stationary capital goods. The imperative of liquid, unimpeded tradability almost automatically kicks out of the concept the non-securitized participatory deeds: cooperative shares in equity, and corporate shares in partnerships. These are tradable, indeed, but at a very slow pace. If you have cooperative shares or those in a partnership, selling them requires a whole procedure of formally expressed consent on the part of the other members (in a cooperative) or partners (in a partnership). Can take months, believe me. Problems with selling those types of participatory deeds find their mirroring image in problems with buying them.

Securitized shares in a joint stock company give some hope regarding my concept: they are freely tradable and can be highly liquid if we only want them to. As the aim of the EneFin project is to promote new suppliers of renewable energies, or the creation of new capacity in the existing suppliers, the first issuance of those complex contracts (futures on energy + capital participation) would be like an initial offering of corporate stock. I see an opening here, yet with some limitations. As soon as I offer my stock to a sufficiently large number of prospective buyers, my initial offering becomes an Initial Public Offering, and my stock falls under the regulations pertaining to the public exchange of corporate stock. The ‘sufficiently large number’ depends on the exact legal regime we are talking about, but it does not need to be that large. The relevant regulations of my home country, Poland, assume a public offering as soon as more than 300 buyers are being addressed. The targeted size of the customer population in the EneFin project depends on the country of operations, but even for a really small, 1 MW local power installation, it certainly takes more than 300 (see This is how I got the first numerical column).

The thing is that in the legally understood public exchange of corporate stock I can trade only that stock. A complex contract in my line of thinking – futures on energy plus participatory deeds – would require, in such a case, carrying out two separate transactions in two separate markets: one transaction in the market of futures contracts, and another one in the public stock exchange. Maybe it is feasible, but it looks sort of clumsy. Mind you, what looks clumsy when handled simultaneously can gain in gracefulness when turned into a sequence. First, I buy futures on energy, then I present them to my provider, and they give me their corporate stock. Or the other way round: first, I buy the stock of that provider, in an IPO, and then, with that stock in hand, I claim my futures on energy. That looks better. I’ll keep that avenue in mind.

Another caveat that comes together with the public exchange of corporate stock is that only licensed brokerage houses can do it. In the EneFin project, that would mean the necessity of signing a contract with such a licensed entity. Right, if I have professional stock brokers in the game, I can entertain another option, that of offering that stock in secondary exchange, not in an IPO. A provider of energy does an ordinary IPO in the stock market, and their stock comes into the system. Then, they offer the following deal: they buy their stock back and they redeem it, and they pay for it with those futures on energy. With good pricing, it could be worth some further thinking.

Everything I have passed in review so far pertains to the equity of the energy provider. I might venture myself into the realm of debt, now. Customers can participate in the balance sheet of their provider via what the French call ‘the bottom part’, namely via liabilities. Along with the futures on energy, customers can acquire bonds or bills of exchange of some kind. Fixed interest rate, no headache about future profits in that energy provider, only some headache left about future liquidity. Debt has the reputation of being more disciplining for the corporate executives than equity.

F**k (spell ‘f-asterisk-asterisk-k’), my mind starts racing. I imagine a transactional platform, where customers buy futures contracts on energy, accompanied by a capital deed of their choice. I buy some kilowatt hours for my future Christmas cooking (serious business over here, in Poland, trust me), and the platform offers me choice. ‘Maybe sir would dare to have a look at those wonderful corporate shares, quite fresh, issued only two months ago, or maybe sir wants to consider choosing that basket with half corporate bonds, half government bonds inside, very solid, sir. Holds money well, sir. If sir is in a genuinely adventurous mood, sir could contemplate to mix Bitcoins with some corporate stock, peppered with a pinch of corporate options, and some futures on gold’.

Right, now I understand the deep logic of the business concept introduced by that Canadian company: Katipult. They have created a financial structure made of an investment fund, whose participatory shares are being converted into a cryptocurrency traded at their internal transactional platform. I understand, too, why they pride themselves with the number of distinct legal regimes they have adapted their scheme to. I see that I should follow the legal regime of my market very closely, in order to find traps and loopholes.

My mind keeps racing. There are three, internally structured and mutually connected sets of financial deeds: a) A set of futures contracts on energy, priced at the retail, non-household rate b) A set of capital deeds issued by the providers of energy, and c) A set of tokens, in some cryptocurrency, which can be purchased for the price of energy at the retail, household rate, and give a claim on both the energy futures and the capital deeds.

A new customer enters that transactional platform and buys a certain number of tokens. Each token can be converted, at a given exchange rate, against the futures on energy and/or the capital deeds. The customer can present the basket of tokens they are holding to any provider of energy registered with that platform, and make a choice of futures on energy and capital deeds.
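The token-conversion step just described can be sketched in a few lines. Both the `Token` structure and the valuation rule (futures valued at the institutional price PI, with the PH – PI difference flowing into the capital deed) are my own illustrative assumptions, consistent with the K(t) formula, not a fixed design of the platform:

```python
from dataclasses import dataclass

@dataclass
class Token:
    """Hypothetical EneFin token, bought at the household price PH."""
    kwh: float          # energy claim carried by the token
    price_paid: float   # what the customer paid: kwh * PH

def convert(token, p_i):
    """Split a token into a futures claim on energy, valued at the
    institutional price PI, and a capital deed worth the remainder,
    i.e. the PH - PI margin the customer paid on top."""
    futures_value = token.kwh * p_i
    capital_deed_value = token.price_paid - futures_value
    return futures_value, capital_deed_value

# A customer buys 100 kWh worth of tokens at PH = 0.25,
# then converts them at a provider charging PI = 0.15:
t = Token(kwh=100.0, price_paid=100.0 * 0.25)
futures, deed = convert(t, p_i=0.15)
```

By construction, the futures value and the capital deed always add up to what the customer paid, which is exactly the identity QH*PH = QH*PI + K(t) from the pricing scheme.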

I think I am progressively coming up with the core process for the EneFin project. Here, below, I am giving its first graphical representation.

EneFin Core Process First Approach
