Another idea – urban wetlands

My editorial on YouTube

I have just come up with an idea. One of those big ones, the kind that pushes you to write a business plan and some scientific stuff as well. Here is the idea: a network of ponds and waterways, made in the close vicinity of a river, being both a reservoir of water – mostly the excess rainwater from big downpours – and a location for a network of small water turbines. The idea comes from a few observations, as well as other ideas, that I have had over the last two years. Firstly, in Central Europe, we have less and less water from melting snow – as there is almost no snow anymore in winter – and more and more water from sudden, heavy rain. We need to learn how to retain rainwater in the most efficient way. Secondly, as we have local floods due to heavy rains, some sort of spontaneous formation of floodplains happens. Even if there is no visible pond, the ground gets a bit spongy and soaked, flood after flood. We have more and more mosquitoes. If it is happening anyway, let’s use it creatively. This particular point is visualised in the map below, with the example of Central and Southern Europe. Thus, my idea is to purposefully utilise a naturally occurring phenomenon, a component of climate change.

Source: https://www.eea.europa.eu/data-and-maps/figures/floodplain-distribution (last accessed June 20th, 2019)

Thirdly, there is a new generation of water turbines: a whole range of small devices, simple and versatile, has come to the market. You can have a look at what those guys at Blue Freedom are doing. Really interesting. Hydroelectricity can now be approached in an apparently much less capital-intensive way. Thus, the idea I have is to purposefully arrange the floodplains we have in Europe into places as energy-efficient and carbon-efficient as possible. I give the general idea graphically in the picture below.

I am approaching the whole thing from the economic point of view, i.e. I want a piece of floodplain arranged into this particular concept to have more value, financial value included, than the same piece of floodplain just being ignored in its inherent potential. I can see two distinct avenues for developing the concept: that of a generally wild, uninhabited floodplain, like public land, as opposed to an inhabited floodplain, with existing or ongoing construction, residential or other. The latter is precisely what I want to focus on. I want to study, and possibly to develop a business plan for, a human habitat combined with a semi-aquatic ecosystem, i.e. a network of ponds, waterways and water turbines in places where people live and work. Hence, from the geographic point of view, I am focusing on places where the secondary formation of floodplain-type terrain already occurs in towns and cities, or in their immediate vicinity. For more than a century, the growth of urban habitats has been accompanied by the entrenching of waterways in strictly defined, concrete-reinforced beds. I want to go the other way, and let those rivers spill their waters into wetlands, in a manner beneficial to human dwelling.

My initial approach to the underlying environmental concept is market-based. Can we create urban wetlands, in flood-threatened areas, where the presence of explicitly and purposefully arranged aquatic structures increases the value of property so as to exceed the investment required? I start with the most fundamental landmarks. I imagine a piece of land in an urban area. It has its present market value, and I want to study its possible value in the future.

I imagine a piece of land located in an urban area with the characteristics of a floodplain, i.e. recurrently threatened by local floods or the secondary effects thereof. At the moment ‘t’, that piece of land has a market value M(t) = S * m(t), being the product of its total surface S, constant over time, and the market price m(t) per unit of surface, changing over time. There are two moments in time, i.e. the initial moment t0, and the subsequent moment t1, after the development into urban wetland. Said development requires a stream of investment I(t0 -> t1). I want to study the conditions for M(t1) – M(t0) > I(t0 -> t1). As surface S is constant over time, my problem breaks down into units of surface, whence the aggregate investment I(t0 -> t1) decomposing into I(t0 -> t1) = S * i(t0 -> t1), and the problem restated as m(t1) – m(t0) > i(t0 -> t1).
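Just to keep that break-even condition operational, here is a minimal numerical sketch; all the figures in it are invented placeholders, only the inequality m(t1) – m(t0) > i(t0 -> t1) comes from the reasoning above.

```python
# Minimal sketch of the break-even condition for an urban-wetland development.
# All numbers are illustrative placeholders, not data.

S = 50_000.0          # total surface of the floodplain plot, in m2 (assumed)
m_t0 = 120.0          # market price per m2 before development, EUR (assumed)
m_t1 = 175.0          # expected market price per m2 after development, EUR (assumed)
i = 40.0              # investment per m2 needed for the development, EUR (assumed)

# Per-unit condition: m(t1) - m(t0) > i(t0 -> t1)
per_unit_gain = m_t1 - m_t0
print(f"Gain per m2: {per_unit_gain:.2f} EUR vs investment {i:.2f} EUR "
      f"-> {'worth doing' if per_unit_gain > i else 'not worth doing'}")

# Aggregate view: M(t1) - M(t0) > I(t0 -> t1), which is just the same condition times S
print(f"Aggregate gain: {S * per_unit_gain:,.0f} EUR vs {S * i:,.0f} EUR of investment")
```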

I assume the market price m(t) is based on two types of characteristics: those directly measurable as financials, for one, e.g. the average wage a resident can expect from a locally based job, and those more diffuse ones, whose translation into financial variables is subtler, and sometimes pointless. I allow myself to call the latter ‘environmental services’. They cover quite a broad range of phenomena, ranging from access to clean water outside the public water supply system, all the way to subjectively perceived happiness and well-being. All in all, mathematically, I say m(t) = f(x1, x2, …, xk): the market price of construction land in cities is a function of k variables. Consistently with the above, I assume that f[t1; (x1, x2, …, xk)] – f[t0; (x1, x2, …, xk)] > i(t0 -> t1).
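Since m(t) = f(x1, x2, …, xk) is, for now, a black box, here is a minimal sketch of one way I could operationalise it: a simple hedonic, weighted-sum form, where the characteristics, weights and values are purely hypothetical, and the split between financial characteristics and environmental services mirrors the distinction above.

```python
# Hypothetical hedonic form of m(t) = f(x1, ..., xk): a weighted sum of characteristics,
# split into directly financial ones and 'environmental services'. Illustrative numbers only.

weights = {
    "expected_local_wage": 0.04,       # EUR of price per m2 per EUR of monthly wage (assumed)
    "commute_minutes": -1.5,           # penalty per minute of average commute (assumed)
    "flood_risk_index": -10.0,         # penalty per point of flood risk (assumed)
    "green_water_access": 20.0,        # premium per point of access to water & greenery (assumed)
    "perceived_wellbeing": 10.0,       # premium per point of subjective well-being (assumed)
}

def hedonic_price(x: dict) -> float:
    """Market price per m2 as a weighted sum of the characteristics in x."""
    return sum(weights[name] * value for name, value in x.items())

x_t0 = {"expected_local_wage": 4500, "commute_minutes": 35,
        "flood_risk_index": 6, "green_water_access": 1, "perceived_wellbeing": 5}
x_t1 = {**x_t0, "flood_risk_index": 2, "green_water_access": 5, "perceived_wellbeing": 7}

i = 40.0  # assumed investment per m2
print(f"f(t0) = {hedonic_price(x_t0):.1f}, f(t1) = {hedonic_price(x_t1):.1f}, "
      f"difference {hedonic_price(x_t1) - hedonic_price(x_t0):.1f} vs i = {i}")
```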

It is intellectually honest to tackle those characteristics of urban land that make up its market price. There is a useful observation about cities: anything that impacts the value of urban real estate sooner or later translates into rent that people are willing to pay for being able to stay there. Please notice that even when we own a piece of real estate, i.e. when we have property rights to it, we usually pay someone some kind of periodic allowance for being able to execute our property rights fully: the real estate tax, the maintenance fee paid to the management of residential condominiums, the fee for sanitation services (e.g. garbage collection) etc. Any urban piece of land has a rent tag attached. Even those characteristics of a place which pertain mostly to the subjectively experienced pleasure and well-being derived from staying there have a rent-like price attached to them, at the end of the day.

Good. I have made a sketch of the thing. Now, I am going to pass in review some published research, in order to set my landmarks. I start with some literature regarding urban planning, and as soon as I do so, I discover an application for artificial intelligence, a topic of interest for me these last months. Lyu et al. (2017[1]) present a method for the procedural modelling of urban layout, and in their work I can spot something similar to the equations I have just come up with: complex analysis of land suitability. It starts with dividing the total area of urban land at hand, in a given city, into standard units of surface. Geometrically, they look nice when they are equally sized squares. Each unit ‘i’ can be potentially used for many alternative purposes. Lyu et al. distinguish 5 typical uses of urban land: residential, industrial, commercial, official, and open & green. Each such surface unit ‘i’ is endowed with a certain suitability for different purposes, and this suitability is a function of a finite number of factors. Formally, the suitability sik of land unit i for use k is a weighted average over a vector of factors, i.e. something like sik = Σj wkj * rij, with the weights wkj summing up to 1, where wkj is the weight of factor j for land use k, and rij is the rating of land unit i on factor j. Below, I am trying to reproduce graphically the general logic of this approach.
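Just to make sure I grasp the logic, here is a minimal sketch of that suitability score; the factors, ratings and weights are invented, only the weighted-sum form sik = Σj wkj * rij reflects my reading of Lyu et al.

```python
import numpy as np

# Suitability of land units for alternative uses, as a weighted average of factor ratings.
# Factors, ratings and weights below are invented for illustration.

land_uses = ["residential", "industrial", "commercial", "official", "open & green"]
factors = ["slope", "flood_risk", "road_access", "distance_to_centre"]

# r[i, j]: rating of land unit i on factor j (say, 0..1), one row per unit of surface
r = np.array([
    [0.9, 0.2, 0.7, 0.6],   # unit 0
    [0.4, 0.8, 0.9, 0.3],   # unit 1
    [0.7, 0.5, 0.3, 0.9],   # unit 2
])

# w[k, j]: weight of factor j for land use k; each row sums to 1, so s_ik stays a weighted average
w = np.array([
    [0.3, 0.4, 0.2, 0.1],   # residential
    [0.2, 0.2, 0.5, 0.1],   # industrial
    [0.1, 0.2, 0.4, 0.3],   # commercial
    [0.2, 0.2, 0.3, 0.3],   # official
    [0.3, 0.5, 0.1, 0.1],   # open & green
])

s = r @ w.T                  # s[i, k] = sum_j w[k, j] * r[i, j]
best_use = s.argmax(axis=1)  # most suitable use for each land unit

for i, k in enumerate(best_use):
    print(f"land unit {i}: most suitable as {land_uses[k]} (score {s[i, k]:.2f})")
```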

In a city approached analytically with the general method presented above, Lyu et al. (2017[1]) distribute three layers of urban layout: population, road network, and land use. It starts with an initial state (input state) of population, land use, and available area. In a first step of the procedure, a simulation of highways and arterial transport connections is made. The transportation grid suggests some kind of division of urban space into districts. As far as I understand it, Lyu et al. define districts as functional units with the quantitative dominance of certain land uses, i.e. residential vs. industrial rather than rich folks’ estate vs. losers’ end, sort of.

As a first sketch of district division is made, it allows simulating a first distribution of population in the city, and a first draft of land use. The distribution of population is largely a distribution of population density, and the corresponding transportation grid is strongly correlated with it. Some modes of urban transport work only above certain critical thresholds in the density of population. This is an important point: density of population is a critical variable in social sciences.

Then, some kind of planning freedom can be allowed inside districts, which results in a second draft of the spatial distribution of population, where a new type of unit – a neighbourhood – appears. Lyu et al. do not explain the concept of neighbourhood in detail, and yet it is interesting. It suggests the importance of spontaneous settlement vs. that of planned spatial arrangement.

I am strongly attached to that notion of spontaneous settlement. I am firmly convinced that in the long run people live where they want to live, and urban planning can just make that process somewhat smoother and more efficient. Thus comes another article in my review of literature, by Mahmoud & Divigalpitiya (2019[2]). By the way, I have an interesting meta-observation: most recent literature about urban development is based on empirical research in emerging economies and in developing countries, with the U.S. coming next, and Europe lagging far behind. In Europe, we do very little research about our own social structures, whilst them Egyptians or Thais are constantly studying the way they live collectively.

Anyway, back to Mahmoud & Divigalpitiya (2019[2]): the article is interesting from my point of view because its authors study the development of new towns and cities. For me, it is an insight into how radically new urban structures sink into the incumbent spatial distribution of population. The specific background of this particular study is a public policy of the Egyptian government to establish, in a planned manner, new cities some distance away from the Nile, and to do it so as to minimize the encroachment on agricultural land. Thus, we have scarce space to fit people into, with optimal use of land.

As I study that paper by Mahmoud & Divigalpitiya, some kind of extension to my initial idea emerges. Those researchers report that with proper water and energy management, more specifically with the creation of irrigative structures like those I came up with – networks of ponds and waterways – paired with a network of small hydropower units, it is possible both to accommodate an increase of 90% in local urban population, and to create 3,75% more agricultural land. Another important finding about those new urban communities in Egypt is that they tend to grow by sprawl rather than by distant settlement. New city dwellers tend to settle close to the incumbent residents, rather than in more remote locations. In simple words: it is bloody hard to create a new city from scratch. Habits and social links are like a tangible expanse of matter, which resists distortion.

I switch to another paper based on Egyptian research, namely that by Hatata et al. 2019[3], relative to the use of small hydropower generators. The paper is rich in technicalities, and therefore I make a note to come back to it many times as I go more into the details of my concept. For now, I have a few general takeaways. Firstly, it is wise to combine off-grid small hydro with small hydro connected to the power grid; more generally, small hydro looks like a good complementary source of power, next to a regular grid, rather than a 100% autonomous power base. Still, full autonomy is possible, mostly with Permanent Magnet Synchronous Generator technology. Secondly, Hatata et al. present a calculation of economic value in hydropower projects, based on their Net Present Value, which, in turn, is calculated on the grounds of a basic assumption that hydropower installations carry some residual capital value Vr over their entire lifetime, and additionally can generate a current cash flow determined by: a) the revenue Rt from the sales of energy, b) the locally needed investment It, c) the operating cost Ot, and d) the maintenance cost Mt, all that in the presence of a periodic discount rate r.
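I am not reproducing the exact equations of Hatata et al. here; the sketch below is just a generic NPV skeleton built from the components they list (Rt, It, Ot, Mt, Vr and the discount rate r), with made-up cash flows, to keep the logic at hand for later.

```python
# Generic NPV sketch for a small hydropower project, built from the components listed above.
# Cash-flow figures are made up; this is not the calculation from Hatata et al., only its skeleton.

r = 0.06                      # periodic discount rate (assumed)
years = 15                    # lifetime of the installation, in years (assumed)
Vr = 20_000.0                 # residual capital value at the end of the lifetime (assumed)

R = [55_000.0] * years                  # revenue from energy sales, per year (assumed)
I = [300_000.0] + [0.0] * (years - 1)   # investment, front-loaded in year 1 (assumed)
O = [8_000.0] * years                   # operating cost per year (assumed)
M = [5_000.0] * years                   # maintenance cost per year (assumed)

npv = sum((R[t] - I[t] - O[t] - M[t]) / (1 + r) ** (t + 1) for t in range(years))
npv += Vr / (1 + r) ** years

print(f"NPV over {years} years at r = {r:.0%}: {npv:,.0f}")
```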

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for your suggestions on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Lyu, X., Han, Q., & de Vries, B. (2017). Procedural modeling of urban layout: population, land use, and road network. Transportation research procedia, 25, 3333-3342.

[2] Mahmoud, H., & Divigalpitiya, P. (2019). Spatiotemporal variation analysis of urban land expansion in the establishment of new communities in Upper Egypt: A case study of New Asyut city. The Egyptian Journal of Remote Sensing and Space Science, 22(1), 59-66.

[3] Hatata, A. Y., El-Saadawi, M. M., & Saad, S. (2019). A feasibility study of small hydro power for selected locations in Egypt. Energy Strategy Reviews, 24, 300-313.


Sketching quickly alternative states of nature

My editorial on YouTube

I am thinking about a few things, as usual, and, as usual, it is a laborious process. The first one is a big one: what the hell am I doing all this for? I mean, what’s the purpose and the point of applying artificial intelligence to simulating collective intelligence? There is one particular issue that I am entertaining in this regard: the experimental check. A neural network can help me in formulating very precise hypotheses as to how a given social structure can behave. Yet, these are hypotheses. How can I have them checked?

Here is an example. Together with a friend, we are doing some research about the socio-economic development of big cities in Poland, in the perspective of seeing them turn into so-called ‘smart cities’. We came to an interesting set of hypotheses generated by a neural network, but we have a tiny little problem: we propose, in the article, a financial scheme for cities, but we don’t quite understand why we propose this exact scheme. I know it sounds idiotic, but well: it is what it is. We have an idea, and we don’t know exactly where that idea came from.

I have already discussed the idea in itself on my blog, in « Locally smart. Case study in finance. »: a local investment fund, created by the local government, to finance local startup businesses. Business means investment, especially at the aggregate scale and in the long run. This is how business works: I invest, and I have (hopefully) a return on my investment. If there is more and more private business popping up in those big Polish cities, and, at the same time, local governments are backing off from investment in fixed assets, let’s make those business people channel capital towards the same type of investment that local governments are withdrawing from. What we need is an institutional scheme where local governments financially fuel local startup businesses, and those businesses implement investment projects.

I am going to try and deconstruct the concept, sort of backwards. I am sketching the landscape, i.e. the piece of empirical research that brought us to formulating the whole idea of an investment fund paired with crowdfunding. Big Polish cities show an interesting pattern of change: local populations, whilst largely stagnating demographically, are becoming more and more entrepreneurial, which is observable as an increasing number of startup businesses per 10 000 inhabitants. On the other hand, local governments (city councils) are spending a consistently decreasing share of their budgets on infrastructural investment. There is more and more business going on per capita, and, at the same time, local councils seem to be slowly backing off from investment in infrastructure. The cities we studied for this phenomenon are: Wroclaw, Lodz, Krakow, Gdansk, Kielce, Poznan, Warsaw.

More specifically, the concept tested through the neural network consists in selecting, each year, the 5% most promising local startups, and funding each of them with €80 000. The logic behind this concept is that when a phenomenon becomes more and more frequent – and this is the case of startups in big Polish cities – an interesting strategy is to fish out, consistently, the ‘crème de la crème’ from among those frequent occurrences. It is as if we were soccer promoters in a country where more and more young people start playing at a competitive level. A viable strategy consists, in such a case, in selecting, over and over again, the most promising players from the top of the heap and promoting them further.

Thus, in that hypothetical scheme, the local investment fund selects and supports the most promising from amongst the local startups. Mind you, that 5% rate of selection is just an idea. It could be 7% or 3% just as well. A number had to be picked, in order to simulate the whole thing with a neural network, which I present further below. The 5% rate can be seen as an intuitive transference from the t-Student significance test in statistics. When you test a correlation for its significance with the t-Student test, you commonly assume that at least 95% of all the observations under scrutiny are covered by that correlation, and you can tolerate 5% of outlying, fringe cases. I suppose this is why we picked, intuitively, that 5% rate of selection among the local startups: 5% sounds just about right to delineate the subset of the most original ideas.

Anyway, the basic idea consists in creating a local investment fund controlled by the local government, and this fund would provide a standard capital injection of €80 000 to 5% of most promising local startups. The absolute number STF (i.e. financed startups) those 5% translate into can be calculated as: STF = 5% * (N/10 000) * ST10 000, where N is the population of the given city, and ST10 000 is the coefficient of startup businesses per 10 000 inhabitants. Just to give you an idea what it looks like empirically, I am presenting data for Krakow (KR, my hometown) and Warsaw (WA, Polish capital), in 2008 and 2017, which I designate, respectively, as STF(city_acronym; 2008) and STF(city_acronym; 2017). It goes like:

STF(KR; 2008) = 5% * (754 624/ 10 000) * 200 = 755

STF(KR; 2017) = 5% * (767 348/ 10 000) * 257 = 986

STF(WA; 2008) = 5% * (1709781/ 10 000) * 200 = 1 710

STF(WA; 2017) = 5% * (1764615/ 10 000) * 345 = 3 044   
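Wrapped into a few lines of code, with a hypothetical function name of my own making, the same calculation looks like this:

```python
# STF = selection_rate * (N / 10 000) * ST10000, reproduced for the four data points quoted above.

def financed_startups(population: int, startups_per_10k: float, selection_rate: float = 0.05) -> float:
    """Number of startups financed by the hypothetical local investment fund."""
    return selection_rate * (population / 10_000) * startups_per_10k

cases = {
    ("KR", 2008): (754_624, 200),
    ("KR", 2017): (767_348, 257),
    ("WA", 2008): (1_709_781, 200),
    ("WA", 2017): (1_764_615, 345),
}

for (city, year), (population, st10k) in cases.items():
    print(f"STF({city}; {year}) = {financed_startups(population, st10k):,.0f}")
```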

That glimpse of empirics allows guessing why we applied a neural network to the whole thing: the two core variables, namely population and the coefficient of startups per 10 000 people, can change with a lot of autonomy vis-à-vis each other. In the whole sample that we used for basic stochastic analysis, thus 7 cities from 2008 through 2017, i.e. 70 observations, those two variables are Pearson-correlated at r = 0,6267. There is some significant correlation, and yet, with r² ≈ 0,39, some 61% of the observable variance in each of those variables doesn’t give a f**k about the variance of the other variable. The covariance of these two seems to be dominated by the variability in population rather than by uncertainty as for the average number of startups per 10 000 people.

What we have is quite predictable a trend of growing propensity to entrepreneurship, combined with a bit of randomness in demographics. Those two can come in various duos, and their duos tend to be actually trios, ‘cause we have that other thing, which I already mentioned: investment outlays of local governments and the share of those outlays in the overall local budgets. Our (my friend’s and mine) intuitive take on that picture was that it is really interesting to know the different ways those Polish cities can go in the future, rather than settling on one central model. I mean, the central stochastic model is interesting too. It says, for example, that the natural logarithm of the number of startups per 10 000 inhabitants, whilst being negatively correlated with the share of investment outlays in the local government’s budget, is positively correlated with the absolute amount of those outlays. The more a local government spends on fixed assets, the more startups it can expect per 10 000 inhabitants. That latter variable is subject to some kind of scale effects from the part of the former. Interesting. I like scale effects. They are intriguing. They show phenomena which change in a way akin to what happens when I heat up a pot full of water: the more heat I have supplied to the water, the more different kinds of stuff can happen. We call it an increase in the number of degrees of freedom.

The stochastically approached degrees of freedom in the coefficient of startups per 10 000 inhabitants, you can see them in Table 1, below. The ‘Ln’ prefix means, of course, natural logarithms. Further below, I return to the topic of collective intelligence in this specific context, and to using artificial intelligence to simulate the thing.

Table 1

Explained variable: Ln(number of startups per 10 000 inhabitants); R2 = 0,608; N = 70

Explanatory variable                               Coefficient of regression   Standard error   Significance level
Ln(investment outlays of the local government)     -0,093                      0,048            p = 0,054
Ln(total budget of the local government)            0,565                      0,083            p < 0,001
Ln(population)                                      -0,328                      0,090            p < 0,001
Constant                                            -0,741                      0,631            p = 0,245

I take the correlations from Table 1, thus the coefficients of regression from the first numerical column, and I check their credentials with the significance level from the last numerical column. As I want to understand them as real, actual things that happen in the cities studied, I recreate the real values. We are talking about coefficients of startups per 10 000 people, comprised somewhere between the observable minimum ST10 000 = 140 and the maximum equal to ST10 000 = 345, with a mean at ST10 000 = 223. In terms of natural logarithms, that world folds into something between ln(140) = 4,941642423 and ln(345) = 5,843544417, with the expected mean at ln(223) = 5,407171771. The standard deviation Ω from that mean can be reconstructed from the standard error of the constant in Table 1, which is calculated as s = Ω/√N, and, consequently, Ω = s*√N. In this case, with N = 70, the standard deviation is Ω = 0,631*√70 = 5,279324767.
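Written down explicitly, and following exactly the steps above (including the admittedly rough reconstruction of Ω from the standard error of the constant in Table 1), the arithmetic goes like:

```python
import math

# Reconstruction of the real-scale landmarks behind Table 1, following the steps above.

st_min, st_mean, st_max = 140, 223, 345          # startups per 10 000 inhabitants
ln_min, ln_mean, ln_max = (math.log(v) for v in (st_min, st_mean, st_max))

N = 70                     # number of observations (7 cities x 10 years)
s = 0.631                  # standard error of the constant term in Table 1
omega = s * math.sqrt(N)   # rough reconstruction of the standard deviation, from s = omega / sqrt(N)

print(f"ln({st_min}) = {ln_min:.4f}, ln({st_mean}) = {ln_mean:.4f}, ln({st_max}) = {ln_max:.4f}")
print(f"Omega = {s} * sqrt({N}) = {omega:.4f}")
```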

That regression is interesting to the extent that it leads to an absurd prediction. If the population of a city shrinks asymptotically down to zero, and if, in the same time, the budget of the local government swells up to infinity, the occurrence of entrepreneurial behaviour (number of startups per 10 000 inhabitants) will tend towards infinity as well. There is that nagging question, how the hell can the budget of a local government expand when its tax base – the population – is collapsing. I am an economist and I am supposed to answer questions like that.

Before being an economist, I am a scientist. I ask embarrassing questions and then I have to invent a way to give an answer. Those stochastic results I have just presented make me think of somehow haphazard a set of correlations. Such correlations can be called dynamic, and this, in turn, makes me think about the swarm theory and collective intelligence (see Yang et al. 2013[1] or What are the practical outcomes of those hypotheses being true or false?). A social structure, for example that of a city, can be seen as a community of agents reactive to some systemic factors, similarly to ants or bees being reactive to pheromones they produce and dump into their social space. Ants and bees are amazingly intelligent collectively, whilst, let’s face it, they are bloody stupid singlehandedly. Ever seen a bee trying to figure things out in the presence of a window? Well, not only can a swarm of bees get that s**t down easily, but also, they can invent a way of nesting in and exploiting the whereabouts of the window. The thing is that a bee has its nervous system programmed to behave smartly mostly in social interactions with other bees.

I have already developed on the topic of money and capital being a systemic factor akin to a pheromone (see Technological change as monetary a phenomenon). Now, I am walking down this avenue again. What if city dwellers react, through entrepreneurial behaviour – or the lack thereof – to a certain concentration of budgetary spending from the local government? What if the budgetary money has two chemical hooks on it – one hook observable as ‘current spending’ and the other signalling ‘investment’ – and what if the reaction of inhabitants depends on the kind of hook switched on, in the given million of euros (or rather Polish zlotys, or PLN, as we are talking about Polish cities)?

I am returning, for a moment, to the negative correlation between the headcount of population, on the one hand, and the occurrence of new businesses per 10 000 inhabitants, on the other hand. Cities – at least those 7 Polish cities that me and my friend did our research on – are finite spaces. Less people in the city means less people per 1 km2 and vice versa. Hence, the occurrence of entrepreneurial behaviour is negatively correlated with the density of population. A behavioural pattern emerges. The residents of big cities in Poland develop entrepreneurial behaviour in response to greater a concentration of current budgetary spending by local governments, and to lower a density of population. On the other hand, greater a density of population or less money spent as current payments from the local budget act as inhibitors of entrepreneurship. Mind you, greater a density of population means greater a need for infrastructure – yes, those humans tend to crap and charge their smartphones all over the place – whence greater a pressure on the local governments to spend money in the form of investment in fixed assets, whence the secondary, weaker negative correlation between entrepreneurial behaviour and investment outlays from local budgets.

This is a general, behavioural hypothesis. Now, the cognitive challenge consists in translating the general idea into empirical hypotheses as precise as possible. What precise states of nature can happen in those cities? This is when artificial intelligence – a neural network – can serve, and this is when I finally understand where that idea of an investment fund had come from. A neural network is good at producing plausible combinations of values in a pre-defined set of variables, and this is what we need if we want to formulate precise hypotheses. Still, a neural network is made for learning. If I want the thing to make those hypotheses for me, I need to give it a purpose, i.e. a variable to optimize, and let it learn as it is optimizing.

In social sciences, entrepreneurial behaviour is assumed to be a good thing. When people recurrently start new businesses, they are in a generally go-getting frame of mind, and this carries over into social activism, into the formation of institutions etc. In an initial outburst of neophyte enthusiasm, I might program my neural network so as to optimize the coefficient of startups per 10 000 inhabitants. There is a catch, though. When I tell a neural network to optimize a variable, it takes the most likely value of that variable, thus, stochastically, its arithmetical average, and it keeps recombining all the other variables so as to have this one nailed down, as close to that most likely value as possible. Therefore, if I want a neural network to imagine relatively high occurrences of entrepreneurial behaviour, I shouldn’t set said behaviour as the outcome variable. I should mix it with others, as an input variable. It is very human, by the way. You brace for achieving a goal, you struggle the s**t out of yourself, and you discover, with negative amazement, that instead of moving forward, you are actually repeating the same existential pattern over and over again. You can set your personal compass, though, on just doing a good job and having fun with it, and then something strange happens. Things get done, and you haven’t even noticed when and how. Goals get nailed down even without being phrased explicitly as goals. And you are having fun with the whole thing, i.e. with life.
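To show what I mean by optimizing something boring and watching the by-product, here is a toy sketch, with an invented mapping and invented numbers, of a procedure that pins one outcome variable to its most likely value whilst the input variables keep recombining; it is a cartoon of the logic, not our actual network.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy illustration: the procedure 'nails down' the outcome variable at its most likely value,
# and the interesting by-product is the stream of recombined input vectors. The three inputs
# stand for (log) investment outlays, total budget and population; the mapping w is invented.

w = np.array([-0.09, 0.56, -0.33])      # fixed, made-up mapping from inputs to the outcome
x = np.array([10.0, 13.0, 13.5])        # starting configuration of the inputs
y_target = 5.4                          # the outcome pinned to its (made-up) mean value

states_of_nature = []
for step in range(200):
    y = w @ x                           # current value of the outcome variable
    error = y - y_target
    x = x - 0.05 * error * w            # nudge the inputs so as to close the gap
    x = x + rng.normal(0.0, 0.02, size=3)   # a bit of exploration, so the inputs keep recombining
    states_of_nature.append(x.copy())

states_of_nature = np.array(states_of_nature)
print("final outcome:", round(float(w @ states_of_nature[-1]), 3))
print("range explored by each input:\n", np.ptp(states_of_nature, axis=0).round(3))
```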

Same for artificial intelligence, as it is, as a matter of fact, an artful expression of our own, human intelligence: it produces the most interesting combinations of variables as a by-product of optimizing something boring. Thus, I want my neural network to optimize on something not-necessarily-fascinating and see what it can do in terms of people and their behaviour. Here comes the idea of an investment fund. As I have been racking my brains in search of the place that idea had come from, I finally understood: an investment fund is both an institutional scheme and a metaphor. As a metaphor, it allows decomposing an aggregate stream of investment into a set of more or less autonomous projects, and decisions attached thereto. An investment fund is a set of decisions coordinated in a dynamically correlated manner: yes, there are ways and patterns to those decisions, but there is a lot of autonomous figuring-out-the-thing in each individual case.

Thus, if I want to put functionally together those two social phenomena – investment channelled by local governments and entrepreneurial behaviour in local population – an investment fund is a good institutional vessel to that purpose. Local government invests in some assets, and local homo sapiens do the same in the form of startups. What if we mix them together? What if the institutional scheme known as public-private partnership becomes something practiced serially, as a local market for ideas and projects?

When we were designing that financial scheme for local governments, me and my friend had the idea of dropping a bit of crowdfunding into the cooking pot, and, as strange as it could seem, we are a bit confused as to where this idea came from. Why did we think about crowdfunding? If I want to understand how a piece of artificial intelligence simulates collective intelligence in a social structure, I need to understand what kind of logical connections I had projected into the neural network. Crowdfunding is sort of spontaneous. When I have a look at the typical conditions proposed by businesses crowdfunded at Kickstarter or at StartEngine, these are shitty contracts, with all due respect. Having a Master’s in law, when I look at the contracts offered to investors in those schemes, I wouldn’t sign such a contract if I had any room for negotiation. I wouldn’t even sign a contract the way I am supposed to sign it via a crowdfunding platform.

There is quite a strong body of legal and business science claiming that crowdfunding contracts are a serious disruption to the established contractual patterns (Savelyev 2017[2]). Crowdfunding largely rests on the so-called smart contracts, i.e. agreements written and signed as software on Blockchain-based platforms. Those contracts are unusually flexible, as each amendment, be it general or specific, can be hash-coded into the history of the individual contractual relation. That turns a large part of legal science on its head. The basic intuition of any trained lawyer is that we negotiate the s**t out of ourselves before the signature of the contract, thus before the formulation of general principles, and anything that happens later is just secondary. With smart contracts, we are pretty relaxed when it comes to setting the basic skeleton of the contract. We just put the big bones in, and expect we’re gonna make up the more sophisticated stuff as we go along.
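Just to illustrate what ‘hash-coded into the history of the individual contractual relation’ can mean in practice, here is a minimal, purely illustrative sketch of an amendment log where each entry carries the hash of the previous one; it is a toy, not the API of any actual smart-contract platform.

```python
import hashlib
import json

# Minimal illustration of an amendment history where each entry is chained to the previous
# one by its hash. This is a toy sketch, not the API of any real smart-contract platform.

def add_amendment(history: list, amendment: dict) -> list:
    previous_hash = history[-1]["hash"] if history else "0" * 64
    payload = json.dumps({"amendment": amendment, "prev": previous_hash}, sort_keys=True)
    entry = {"amendment": amendment, "prev": previous_hash,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    return history + [entry]

history = []
history = add_amendment(history, {"capital": "£400", "reward": "10% discount on 1 turbine"})
history = add_amendment(history, {"capital": "+£100", "reward": "7% discount on 2 turbines"})

for i, entry in enumerate(history):
    print(f"amendment {i}: {entry['amendment']} -> hash {entry['hash'][:12]}…")
```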

With the abundant usage of smart contracts, crowdfunding platforms have peculiar legal flexibility. Today you sign up for a discount of 10% on one Flower Turbine, in exchange for £400 in capital crowdfunded via a smart contract. Next week, you learn that you can turn your 10% discount on one turbine into 7% on two turbines if you drop just £100 more into that piggy bank. Already the first step (£400 against the discount of 10%) would be a bit hard to squeeze into classical contractual arrangements for investing in the equity of a business, let alone the subsequent amendment (Armour, Enriques 2018[3]).

Yet, with a smart contract on a crowdfunding platform, anything is just a few clicks away, and, as astonishing as it could seem, the whole thing works. The click-based smart contracts are actually enforced and respected. People do sign those contracts, and moreover, when I mentally step out of my academic lawyer’s shoes, I admit being tempted to sign such a contract too. There is a specific behavioural pattern attached to crowdfunding, something like the Russian ‘Davaj, riebiata!’ (‘Давай, ребята!’ in the original spelling). ‘Let’s do it together! Now!’, that sort of thing. It is almost as if I were giving someone the power of attorney to be entrepreneurial on my behalf. If people in big Polish cities found more and more startups per 10 000 residents, it is a more and more recurrent manifestation of entrepreneurial behaviour, and crowdfunding touches the very heart of entrepreneurial behaviour (Agrawal et al. 2014[4]). It is entrepreneurship broken into small, tradable units. The whole concept we invented is generally placed in the European context, and in Europe crowdfunding is way below the popularity it has reached in North America (Rupeika-Apoga, Danovi 2015[5]). As a matter of fact, European entrepreneurs seem to consider crowdfunding as a really secondary source of financing.

Time to sum up a bit all those loose thoughts. Using a neural network to simulate collective behaviour of human societies involves a few deep principles, and a few tricks. When I study a social structure with classical stochastic tools and I encounter strange, apparently paradoxical correlations between phenomena, artificial intelligence may serve. My intuitive guess is that a neural network can help in clarifying what is sometimes called ‘background correlations’ or ‘transitive correlations’: variable A is correlated with variable C through the intermediary of variable B, i.e. A is significantly correlated with B, and B is significantly correlated with C, but the correlation between A and C remains insignificant.
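A quick numerical illustration of that transitive situation, with simulated data for a chain A -> B -> C: the end-to-end correlation r(A, C) lands close to the product of the two link correlations, hence much weaker than either link, and easy to dismiss as insignificant in a small sample.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 70  # same sample size as in our study, just for the sake of the example

# Simulated chain A -> B -> C, with enough noise for the direct A-C link to fade.
A = rng.normal(size=n)
B = 0.5 * A + rng.normal(scale=0.9, size=n)
C = 0.5 * B + rng.normal(scale=0.9, size=n)

def r(x, y):
    """Pearson correlation between two samples."""
    return np.corrcoef(x, y)[0, 1]

print(f"r(A, B) = {r(A, B):.2f}, r(B, C) = {r(B, C):.2f}, r(A, C) = {r(A, C):.2f}")
```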

When I started to use a neural network in my research, I realized how important it is to formulate very precise and complex hypotheses rather than definitive answers. Artificial intelligence allows sketching quickly alternative states of nature, by the gazillion. For a moment, I am leaving the topic of those financial solutions for cities, and I return to my research on energy, more specifically on energy efficiency. In a draft article I wrote last autumn, I started to study the relative impact of the velocity of money, as well as that of the speed of technological change, upon the energy efficiency of national economies. Initially, I approached the thing in a nicely and classically stochastic way. I came up with conclusions of the type: ‘variance in the supply of money accounts for 7% of the observable variance in energy efficiency, and the correlation is robust’. Good, this is a step forward. Still, in practical terms, what does it give? Does it mean that we need to add money to the system in order to have greater an energy efficiency? Might well be the case, only you don’t add money to the system just like that, ‘cause most of said money is account money on current bank accounts, and the current balances of those accounts reflect the settlement of obligations resulting from complex private contracts. There is no government that could possibly add more complex contracts to the system.

Thus, stochastic results, whilst looking and sounding serious and scientific, have a remote connection to practical applications. On the other hand, if I take the same empirical data and feed it into a neural network, I get alternative states of nature, and those states are bloody interesting. Artificial intelligence can show me, for example, what happens to energy efficiency if a social system is more or less conservative in its experimenting with itself. In short, artificial intelligence allows super-fast simulation of social experiments, and that simulation is theoretically robust.


[1] Yang, X. S., Cui, Z., Xiao, R., Gandomi, A. H., & Karamanoglu, M. (2013). Swarm intelligence and bio-inspired computation: theory and applications.

[2] Savelyev, A. (2017). Contract law 2.0: ‘Smart’ contracts as the beginning of the end of classic contract law. Information & Communications Technology Law, 26(2), 116-134.

[3] Armour, J., & Enriques, L. (2018). The promise and perils of crowdfunding: Between corporate finance and consumer contracts. The Modern Law Review, 81(1), 51-84.

[4] Agrawal, A., Catalini, C., & Goldfarb, A. (2014). Some simple economics of crowdfunding. Innovation Policy and the Economy, 14(1), 63-97.

[5] Rupeika-Apoga, R., & Danovi, A. (2015). Availability of alternative financial resources for SMEs as a critical part of the entrepreneurial eco-system: Latvia and Italy. Procedia Economics and Finance, 33, 200-210.

Memoirs of a converted cyclist

My editorial on YouTube

I am thinking about the trends I observe in the energy sector. I am rephrasing what I have just pointed out in « Lean, climbing trends »: the consumption side of energy changes according to a pattern very different from the production side. On the consumption side, we can observe relatively stable, growing trends, centred on two indicators: energy consumption per capita and the percentage of the population with access to electricity. On the production side, things are structurally different. Fossil fuels, nuclear, hydro, wind, solar: our aggregate activity with all those sources of energy looks like a somewhat random assemblage of experiments, more or less independent from one another.

When I ask myself questions about collective intelligence, I go back to individual intelligence, and the one closest at hand is mine. I have just realized that over the last two years I have radically changed my lifestyle, towards something clearly more eco-friendly than before, only the funny thing is that I had no intention at all of becoming more eco-friendly. It all started with the bicycle. I started cycling across the city. Very quickly, I discovered that special sense of freedom the bicycle gives in an urban environment. My brain started associating the car with forced confinement rather than with freedom of movement. Soon, I started cycling to my workplace – the university – some 10 km from home. My car was spending more and more time parked next to the house.

Last winter was what winters have become, i.e. a sort of slightly cold autumn. And here I discovered that cycling in weather like that, when the temperature is barely above zero, gives a crazy shot of endorphins. It was downright intoxicating, and I can tell you that in one’s fifties, doing a 20 km round trip by bike and feeling good afterwards is a discovery in itself. As I was getting more and more into the habit of cycling, I realized that my lifestyle was changing. When I did my shopping on the way back from the university, I bought what I could carry in the rear panniers of my bike, plus what I could stuff into my backpack, where I carry my work outfit: jacket, shirt, city trousers. The bike forced me to economize on the volume of my daily shopping, and the interesting thing is that this reduced volume was perfectly sufficient. I realized that a substantial part of what I buy when moving around by car, well, I buy it just because I can (I have cargo space available) and not because I really need it.

I did my calculations. I used the page https://www.carbonfootprint.com to compute my car’s CO2 emissions, and here it is: one day of cycling, with my Honda Civic waiting for me nicely at home, translates into savings of 4,5 kilograms of carbon dioxide. According to World Bank data[1], in 2014, in my country, Poland, CO2 emissions per capita were 7,5 tonnes per year, against a world average of 4,97 tonnes per year. Transport accounts for about 20%[2] of those emissions, thus 1,5 tonnes per year, or 4,1 kilograms per day on average. Those 4,5 kilos of CO2 per day therefore look consistent with the lifestyle of an average Pole.

My savings on daily shopping, when I pedal, amount to roughly €30 a week. Using the page https://www.carbonfootprint.com once again, I converted that into 4,5 kilograms of CO2 saved per day. How about that! All in all, a day on the bike, in my precise social context, seems to correspond to some 9 kilograms of CO2 less, compared to the same day in the car. The minuses have their pluses, mind you. When I pedal, I physically depreciate my bicycle. Each kilometre brings me closer to the annual service, as well as to the moment when I will have to change bikes or radically overhaul the one I have now (Gazelle Chamonix C-7). I used the calculations presented at https://momentummag.com/how-green-is-your-bicycle-manufacturing/ plus the calculator converting kilojoules of energy into CO2 emitted, and it came out at 150 grams of CO2 per day as the equivalent of the physical depreciation of my bicycle.

All in all, a working day spent in bicycle mode corresponds, in my individual lifestyle, to a net reduction in emissions of about 9 – 0,15 = 8,85 kg of CO2. I reconstructed my 2018 calendar and it gave some 130 working days when I replaced the car with the bike. Mind you, when the weather gets wintry enough for there to be a layer of old snow or black ice on the cycling paths, I give up. I have already wiped out a few times in conditions like that, and I have learned that the bike has its limits. Be that as it may, those 130 days in 2018 correspond to an individual reduction in CO2 emissions equivalent to about 1,15 tonnes, i.e. 15,3% of the average annual emissions per capita in Poland.
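Just to keep the arithmetic in one place, here is a minimal sketch of that back-of-the-envelope calculation; the figures are the ones quoted above, and the variable names are mine.

```python
# Back-of-the-envelope CO2 arithmetic for one converted cyclist (figures quoted above).

CAR_DAY_KG = 4.5          # kg CO2 avoided by leaving the car at home for one day
SHOPPING_KG = 4.5         # kg CO2 avoided through reduced daily shopping
BIKE_WEAR_KG = 0.15       # kg CO2 per day attributed to the bicycle's physical depreciation
CYCLING_DAYS = 130        # working days in 2018 spent in bicycle mode
PL_PER_CAPITA_T = 7.5     # average annual CO2 emissions per capita in Poland, tonnes

net_daily_kg = CAR_DAY_KG + SHOPPING_KG - BIKE_WEAR_KG          # 8.85 kg per cycling day
annual_tonnes = net_daily_kg * CYCLING_DAYS / 1000.0            # about 1.15 tonnes per year
share_of_average = annual_tonnes / PL_PER_CAPITA_T              # about 15.3 %

print(f"Net saving per cycling day: {net_daily_kg:.2f} kg CO2")
print(f"Annual saving: {annual_tonnes:.2f} t CO2 ({share_of_average:.1%} of the Polish average)")
```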

So there it is: I changed my mode of transport, and this pushed me to modify my consumption style. More and more eco-friendly at every step. Only, that was not my goal. It all started because I wanted to move around more comfortably and I was fed up with spending time in traffic jams. Quite honestly, I was not thinking much about the environment. I was very far from the Captain Planet type. Of course, I knew that by letting my car nap peacefully at home I was saving fuel, but those were vague thoughts. It all happened by itself. Each small change led to another, as I kept receiving immediate rewards. No conscious deprivation. It was a revisit of Adam Smith: by pursuing selfish ends, I had accomplished a change favourable to the environment.

My environment offered me stimuli to change my lifestyle. Let’s imagine thousands of people like me. Small daily discoveries, small personal changes followed by immediate rewards: any given urban environment offers a finite set of such rewards. Well, yes, those rewards are finite in volume. If, starting now, 50 000 people in my city (Krakow, Poland) make the same change I made, the cycling paths will be completely clogged and the rewards will become much more problematic. At any given moment, the city releases a diffuse and yet finite cloud of behavioural rewards, which a certain number of cyclists can absorb, and that triggers a change of lifestyle.

Let me try to be more precise. The official population of the city of Krakow is about 800 000 people. With immigrants not registered as permanent residents, as well as daily commuters coming in from satellite towns, as I do, I estimate the real total population of my beloved city at some 1 200 000 people. That population coexists with about 230 km of cycling paths, and with a car fleet (all categories taken together) of roughly 570 000 vehicles. Each addition to the car fleet creates negative reinforcement as regards individual car use, and at the same time indirect positive reinforcement for thinking about something else. Each addition to the total length of cycling paths produces positive reinforcement in favour of cycling. In terms of producing those stimuli, the city of Krakow generated, over the period from 2011 to 2018, 122 additional kilometres of cycling paths and an additional fleet of about 115 000 cars. That combination of negative reinforcement regarding the car and positive reinforcement regarding the bicycle did its job. The result: in 2016, according to City Council data, about 90 000 people were using the bicycle as a more or less regular means of transport, and last year, in 2018, the figure may have reached 200 000 people.

Several times in my life, I have had that strange impression that big cities are like living organisms. The feeling becomes particularly vivid when I have the opportunity to watch a big city at night, or even better, at dawn, from an elevated observation point. In 2013, I had the opportunity to contemplate the panorama of Madrid that way, as the city was waking up. The impression of seeing an enormous beast stretching itself, its blood (the flow of road traffic) starting to run faster through its veins, was so poignant that I almost wanted to reach out and stroke the giant’s mane, made of a row of tall buildings. A city thus releases a flow of stimulants: more cars in the streets, hence more traffic density, accompanied by more cycling paths, hence more comfort in getting around by bike. Note: a concrete geography of road traffic and cycling paths, in a bird’s-eye view and also in a probabilistic, mathematical view, is like a cloud of infrastructure superimposed on a cloud of people in motion.

The inhabitants respond selectively to that flow of stimulants by accomplishing a progressive change in their lifestyles. So here I am, once again, thinking about the concept of collective intelligence, and I am more and more inclined to define it along the broad lines of swarm theory. See « Ensuite, mon perceptron réfléchit » or « Joseph et le perceptron » to learn more about that theory. I therefore define collective intelligence as collective action coordinated by the production and dissemination of a systemic agent similar to a hormone, which transmits information in a semi-targeted way, where the recipient of the information is defined by the compatibility of their perceptive faculties with the properties of the systemic agent itself. Any member of society endowed with the required characteristics can ‘read’ the information carried by the systemic agent. Financial markets come to mind as the most illustrative example of such a mechanism, but we can look for that ‘hormonal’ component in any social behaviour. Take marriage. In our conjugal behaviour, there may be components – small, recurrent behavioural sequences – whose function is to communicate something to our social environment at large, and thereby to provoke certain behaviours in people we know nothing about.

I come back to subjects a bit less complicated than marriage, i.e. to the energy market. I tell myself that if I want to study that market as a case of collective intelligence, I need to identify one or more systemic agents. Money and financial instruments are, once again, the obvious candidates. There may be others. This is where I can sketch the practical utility of my research on using artificial intelligence to simulate collective intelligence. The most obvious thing that comes to mind is the simulation of climate policies. Take, for example, the idea put forward by researchers in the United States, mostly around Stanford University, concerning profitable carbon capture (Sanchez et al. 2018[3]; Jackson et al. 2019[4]). Jackson et al. take an original angle. They assume that humankind produces carbon dioxide and methane, both of which are greenhouse gases, only methane traps heat about 84 times more strongly than carbon dioxide. If we convert methane into carbon dioxide, we turn a more powerful harmful agent into a much weaker one. That is always something gained, and on top of that, Jackson et al. claim to have worked out a profitable method of capturing the methane produced in cattle farming and transforming it into carbon dioxide, through the use of zeolite. Zeolite is a rigid crystalline structure of aluminosilicate, with cations and water molecules in the free spaces. The methane generated in livestock farming is pumped, through a system of fans, across large porous plates of zeolite. The zeolite acts as a filter, which ‘breaks’ the methane molecules apart and turns them into carbon dioxide molecules.

Jackson et al. suggest that their method can be exploited at a profit. There is one little ‘but’: at a profit means ‘on condition’, and the condition is a carbon offset market where the price of a tonne would be at least $500. I take a look at the carbon offset market as it is now, according to the World Bank report ‘State and Trends of Carbon Pricing 2018’. The market is developing quite fast. In 2005, all carbon-offsetting initiatives in the world covered about 4% of total greenhouse gas emissions. In 2018, that was already some 14%, with close to 20% expected in 2020. Only on the price side, the absolute maximum, i.e. the Swedish tax on emissions, was $139 per tonne. The median price seems to sit between $20 and $25. Very far from the $500 per tonne that the method of Jackson et al. needs in order to be profitable.

Sanchez et al. (2018) take a different approach. They focus on technologies – or rather complex sets of technologies in mutually integrated industries – that make it possible to sell the CO2 produced in one of those industries to another. The industrial market for carbon dioxide – for example in beer production – is estimated at some 80 tonnes of liquid CO2 per year. Not really huge – a hundred or so converted cyclists like me would do the job – but it is always something gained.

The ideas I have just mentioned may one day combine into public policies, and then the question of their effectiveness will arise, just as we now ask questions about the effectiveness of the so-called ‘climate policies’. Viewed mathematically, any policy is a set of variables, structured into expected outcomes on the one hand, and tools as well as external determinants on the other hand. This perspective makes it possible to express policies as artificial intelligence algorithms. The outcomes are what we want to have. Let’s say that what we want is an energy efficiency ‘EE’ – the coefficient of GDP divided by the quantity of energy consumed – 20% higher than the present level. We know that EE depends on a set of ‘n’ factors, some of which we control, whilst it is reasonable to consider others as exogenous.

I thus have an equation in the style: EE = f(x1, x2, …, xn). In what we can call classical stochastic calculus, the point is to look for the most accurate possible linear expression of the function f(x1, x2, …, xn), i.e. something like EE = a1*x1 + a2*x2 + … + an*xn. That approach serves to determine the most probable value of EE given a vector of conditions (x1, x2, …, xn). That central tendency is based on the law of… Seen from another angle, the same policy can be expressed as a set of several hypothetical and equiprobable states of nature, i.e. several probable configurations of (x1, x2, …, xn) that could accompany the desired energy efficiency EE(t1) = 1,2*EE(t0). This is where artificial intelligence can serve (see, for example, « Existence intelligente et pas tout à fait rationnelle »).
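Here is a minimal sketch of that second perspective, assuming purely illustrative factor values and a made-up linear mapping: instead of asking for the single most probable EE, we ask for many configurations of (x1, …, xn) that would all land close to the target EE(t1) = 1,2*EE(t0).

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely illustrative linear model EE = a1*x1 + ... + an*xn (coefficients are made up).
a = np.array([0.4, 0.25, 0.2, 0.15])
x0 = np.array([1.0, 1.0, 1.0, 1.0])          # present configuration of the n = 4 factors
ee_target = 1.2 * a @ x0                     # desired EE(t1) = 1,2 * EE(t0)

# Sample random configurations around the present state and keep those that land close
# to the target: a crude set of equiprobable "states of nature".
candidates = x0 + rng.normal(0.0, 0.3, size=(20000, 4))
ee = candidates @ a
states_of_nature = candidates[np.abs(ee - ee_target) < 0.01]

print(f"Target EE: {ee_target:.3f}")
print(f"Configurations found: {len(states_of_nature)}")
print("A few of them:\n", states_of_nature[:3].round(3))
```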

I wonder how to interpret those phenomena, and my mind wanders into an adjacent region: food. Sorry, I meant: agriculture. There is a clear difference between Northern and Southern Europe as far as agriculture is concerned. By Southern Europe I mean mostly the great Mediterranean peninsulas: the Iberian, the Apennine and the Peloponnese. Northern Europe is everything further away from the Mediterranean. In the South, there is much less animal farming, and crop production is centred on fruit, with relatively few cereal crops and few root vegetables (potatoes, beets etc.). In the North of Europe, it is almost exactly the opposite: agriculture is dominated by cereals, root vegetables and animal farming.

Cereals and root vegetables grow fast. I can decide practically from one year to the next on the exact use of a given field. Beets or wheat can be moved from one field to another, year after year, almost without any trouble. What is more, in the traditional European agriculture of the North, that is precisely what one was supposed to do: crop rotation. Fruit trees, on the other hand, grow slowly. You have to wait years before a new orchard is mature enough for production. Moving fruit plantations from one agricultural season to the next is out of the question. The Northern model therefore gives more flexibility in terms of managing arable land. That flexibility goes further. A cereal harvest can be divided flexibly between several uses: so much for current human consumption, so much for future human consumption, so much for fodder and so much for next year’s sowing. For root vegetables, it is a bit more complicated. For potatoes, the best solution is to replant an already harvested potato: it will be more predictable.

For carrots, you have to harvest the seeds separately and replant them afterwards. All in all, the Northern crops keep well and lend themselves to multiple uses.

In the South, with its dominant fruit cultures, things are different. Fruit, with the exception of a few hardy kinds – such as pumpkins or squashes – keeps poorly outside a cold room, and that is one of the reasons why it is problematic to feed farm animals with it. Here comes the next point: the North of Europe abounds in animal farming, and therefore in animal protein and fat. Both are highly nutritious and, on top of that, animal fat preserves animal protein well. Yes, that is the raison d'être of the sausage: saturated fatty acids, being saturated and thus devoid of free chemical bonds, act as a brake on chemical reactions. A sausage is meat (protein) wrapped in animal fat, which keeps said protein from engaging in dubious liaisons with oxygen.

Besides protein and fat, farm animals shit everywhere, and thus they fertilise the soil. The cow's intestinal bacteria, along with its digestive enzymes, work for the common good of the cow, the farmer and the crops. Your average beet has every interest in living next door to a cow rather than pursuing a solo career. Here is an interesting chain, then: crop farming dominated by cereals and root vegetables favours intensive animal farming which, in turn, favours fast-growing crops with high nutritional requirements in terms of soil, i.e. cereals and root vegetables, and so on. Southern crop farming, dominated by fruit trees, remains largely independent of animal farming. The latter, in the South, concentrates on goats and sheep, which mostly need natural pasture.

In terms of nutritional productivity, the Northern model beats the Southern one by several lengths. These two different models are tied to two different geographies. The North of Europe is flatter, colder, wetter and endowed with richer soils than the South. More food means more people per square kilometre, more industry, more road traffic, and all of that, taken together with intensive animal farming, means more nitrogen pollution. The latter has an interesting property: it acts as permanent fertilisation. Since nitrogen pollution is not really controlled, that involuntary fertilisation goes mostly to the plant species with the greatest capacity to capture it: trees. Recently, I had a discussion with a researcher from the Agricultural University of Krakow, Poland, who fairly knocked me over with the following fact: due to nitrogen pollution, Poland gains every year a surplus of roughly 30 million cubic metres of living trees, and nobody really knows what to do with it. As episodic droughts become more and more frequent, that surplus of trees has a perverse effect: trees are also the most efficient at capturing water, and during a drought they beat every other plant at that discipline.

Through a strange causal chain, the Northern agricultural system contributes to rebuilding what the North has always tended to over-exploit: forests. A crazy hypothesis is germinating in my mind. During the 18th century and the first half of the 19th, our European ancestors had seriously depleted the forest substance of the continent. From the second half of the 19th century onwards, they began to exploit fossil fuels more and more, and thus to produce more and more local pollution with nitrogen oxides. In doing so, they set in motion a process which, decades later, is helping to rebuild the forest mass of the continent. Is it conceivable that our adventure with fossil fuels is a collectively intelligent action aimed at rebuilding forests? Crazy, isn't it? Yes, of course, we have pumped tonnes of carbon into the planet's atmosphere along the way, but what can I tell you: being intelligent does not necessarily mean being truly far-sighted.

What analogies are there between these models of agriculture and the energy systems I reviewed in « Lean, climbing trends »? In both cases, there is a component of more or less steady growth – more kilocalories per day per person, and more people eating their fill, in the case of agriculture; more kilograms of oil equivalent per person per year, and more people with access to electricity, in the case of energy – accompanied by heterogeneous sets of trial and error on the production side. Those trials and errors seem to share one common trait: they form complex productive bases. An energy system concentrated exclusively on a single source of energy, for instance nothing but photovoltaics, seems just as unbalanced as an agricultural system that grows only one plant or animal species, say nothing but sheep or nothing but maize.

I keep delivering good science to you, almost brand new, just slightly dented in the design process. Let me remind you that you can download the business plan of the BeFund project (also available in English). You can also download my book entitled “Capitalism and Political Power”. I want to use crowdfunding to give myself a financial base for this effort. You can support my research financially, according to your best judgment, through my PayPal account. You can also register as my patron on my Patreon account. If you do so, I will be grateful if you let me know two important things: what kind of reward do you expect in exchange for your patronage, and what milestones would you like to see in my work? You can contact me through this blog's mailbox: goodscience@discoversocialsciences.com .


[1] https://data.worldbank.org/indicator/en.atm.co2e.pc last access March 26th, 2019

[2] https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions last access March 26th, 2019

[3] Sanchez, D. L., Johnson, N., McCoy, S. T., Turner, P. A., & Mach, K. J. (2018). Near-term deployment of carbon capture and sequestration from biorefineries in the United States. Proceedings of the National Academy of Sciences, 115(19), 4875-4880.

[4] Jackson, R. B., et al. (2019). Methane removal and atmospheric restoration. Nature Sustainability. DOI: 10.1038/s41893-019-0299-x

Lean, climbing trends

My editorial on You Tube

Our artificial intelligence: the working title of my research, for now. Volume 1: Energy and technological change. I am doing a little bit of rummaging in available data, just to make sure I keep contact with reality. Here comes a metric: access to electricity in the world, measured as the % of total human population[1]. The trend line looks proudly ascending. In 2016, 87.38% of mankind had at least one electric socket in their place. Ten years earlier, by the end of 2006, the figure was 81.2%. Optimistic. Looks like something growing almost linearly. Another one: « Electric power transmission and distribution losses »[2]. This one looks different: instead of a clear trend, I observe something shaking and oscillating, with the width of variance narrowing gently down, as time passes. By the end of 2014 (the last data point in this dataset), we were globally at 8.25% of electricity lost in transmission. The lowest coefficient of loss occurred in 1998: 7.13%.

I move from distribution to production of electricity, and to its percentage supplied from nuclear power plants[3]. Still another shape, that of a steep bell with surprisingly lean edges. Initially, it was around 2% of global electricity supplied by the nuclear. At the peak of fascination, it was 17.6%, and by the end of 2014, we had gone down to 10.6%. The thing seems to be temporarily stable at this level. As I move to water, and to the percentage of electricity derived from the hydro[4], I see another type of change: a deeply serrated, generally descending trend. In 1971, we had 20.2% of our total global electricity from the hydro, and by the end of 2014, we were at 16.24%. In the meantime, it looked like a rollercoaster. Yet, as I am having a look at other renewables (i.e. other than hydroelectricity) and their share in the total supply of electricity[5], the shape of the corresponding curve looks like a snake trying to figure something out about a vertical wall. Between 1971 and 1988, the share of those other renewables in the total electricity supplied moved from 0.25% to 0.6%. Starting from 1989, it is an almost perfectly exponential growth, reaching 6.77% in 2015.

Just to have a complete picture, I shift slightly, from electricity to energy consumption as a whole, and I check the global share of renewables therein[6]. Surprise! This curve does not behave at all as it is expected to behave after having seen the previously cited share of renewables in electricity. Instead of a snake sniffing a wall, we can see a snake viewed from above, or something like a meandering river. This seems to be a cycle over some 25 years (could it be Kondratiev's?), with a peak around 18% of renewables in the total consumption of energy, and a trough somewhere by 16.9%. Right now, we seem to be close to the peak.

I am having a look at the big, ugly brother of hydro: the oil, gas and coal sources of electricity, and their share in the total amount of electricity produced[7]. Here, I observe a different shape of change. Between 1971 and 1986, the fossils dropped their share from 62% to 51.47%. Then it rocketed back up to 62% in 1990. Later, a slowly ascending trend starts, just to reach a peak and oscillate for a while around some 65–67% between 2007 and 2011. Since then, the fossils have been dropping again: the short-term trend is descending.

Finally, one of the basic metrics I have been using frequently in my research on energy: the final consumption thereof, per capita, measured in kilograms of oil equivalent[8]. Here, we are back in the world of relatively clear trends. This one is ascending, with some bumps on the way, though. In 1971, we were at 1336.2 koe per person per year. In 2014, it was 1920.655 koe.

Thus, what are all those curves telling me? I can see three clearly different patterns. The first is the ascending trend, observable in the access to electricity, in the consumption of energy per capita, and, since the late 1980s, in the share of electricity derived from renewable sources. The second is a cyclical variation: the share of renewables in the overall consumption of energy, and to some extent the relative importance of hydroelectricity, as well as that of the nuclear. Finally, I can observe a descending trend in the relative importance of the nuclear since 1988, as well as in some episodes from the life of hydroelectricity, coal and oil.

On top of that, I can distinguish different patterns in the production of energy, on the one hand, and in its consumption, on the other. The former seems to change along relatively predictable, long-term paths. The latter looks like a set of parallel, and partly independent, experiments with different sources of energy. We are collectively intelligent: I deeply believe that. I mean, I hope. If bees and ants can be smarter collectively than singlehandedly, there is some potential in us as well.

Thus, I am progressively designing a collective intelligence, which experiments with various sources of energy, just to produce those two relatively lean, climbing trends: more energy per capita and an ever-growing percentage of people with access to electricity. Which combinations of variables can produce a rationally desired energy efficiency? How is the supply of money changing as we reach different levels of energy efficiency? Can artificial intelligence make energy policies? Empirical check: take a real energy policy and build a neural network which reflects the logical structure of that policy. Then add a method of learning and see what it produces as a hypothetical outcome.
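Below is a minimal sketch of that empirical check, under heavy assumptions: a tiny network whose inputs mimic policy tools and external determinants, trained towards a desired outcome. The variables, sizes and numbers are illustrative stand-ins, not an actual energy policy.

```python
# A sketch of the empirical check described above: a tiny neural network whose structure mirrors
# a hypothetical policy (tools + external determinants -> desired outcome). Everything here is an
# illustrative assumption, not a real policy.
import numpy as np

rng = np.random.default_rng(0)
tools = rng.uniform(0, 1, size=(100, 2))          # e.g. subsidies, efficiency standards (assumed)
externals = rng.uniform(0, 1, size=(100, 2))      # e.g. fuel prices, weather (assumed)
X = np.hstack([tools, externals])
desired = np.full(100, 0.9)                       # normalised policy target, e.g. efficiency 20% above today

weights = rng.normal(0, 0.1, size=X.shape[1])
for epoch in range(2000):
    produced = np.tanh(X @ weights)               # hypothetical outcome produced by the network
    error = desired - produced
    weights += 0.05 * X.T @ (error * (1 - produced**2)) / len(X)   # gradient step through tanh

print("hypothetical outcome after learning:", round(float(np.tanh(X @ weights).mean()), 3))
print("learned weights (tools + externals):", np.round(weights, 3))
```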

What is the cognitive value of hypotheses made with a neural network? The answer to this question starts with another question: how do hypotheses made with a neural network differ from any other set of hypotheses? The hypothetical states of nature produced by a neural network reflect the outcomes of logically structured learning. The process of learning should represent real social change and real collective intelligence. I have observed four important distinctions in this respect so far: a) awareness of internal cohesion, b) internal competition, c) relative resistance to new information, and d) perceptual selection (different ways of standardizing input data).

The awareness of internal cohesion, in a neural network, is a function that feeds the information on relative cohesion (Euclidean distance) between variables into the consecutive experimental rounds of learning. We assume that each variable used in the neural network reflects a sequence of collective decisions in the corresponding social structure. Cohesion between variables represents the functional connection between sequences of collective decisions. Awareness of internal cohesion, as a logical attribute of a neural network, corresponds to situations when societies are aware of how mutually coherent their different collective decisions are. The lack of logical feedback on internal cohesion represents situations when societies do not have that internal awareness.
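A minimal sketch of that cohesion feedback, assuming synthetic data: after each round, the average Euclidean distance between the decision variables and the produced outcome is fed back to the network as one extra input.

```python
# A sketch of 'awareness of internal cohesion': after each experimental round, the mean Euclidean
# distance between the input variables and the produced outcome is computed and fed back as one
# additional input in the next round. Data and dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(50, 3))          # three 'sequences of collective decisions'
y = rng.uniform(0, 1, size=50)               # reference outcome
weights = rng.normal(0, 0.1, size=X.shape[1] + 1)
cohesion = 0.0                               # the network starts with no awareness of itself

for epoch in range(300):
    X_aware = np.hstack([X, np.full((len(X), 1), cohesion)])
    produced = np.tanh(X_aware @ weights)
    error = y - produced
    weights += 0.01 * X_aware.T @ (error * (1 - produced**2)) / len(X)
    # cohesion = average Euclidean distance between each decision variable and the outcome
    cohesion = float(np.mean([np.linalg.norm(X[:, i] - produced) for i in range(X.shape[1])]))

print("final cohesion signal:", round(cohesion, 3))
print("mean absolute error:", round(float(np.abs(error).mean()), 3))
```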

As I metaphorically look around, I ask myself what awareness I have of important collective decisions in my local society. I can observe and pattern people's behaviour, for one. Next thing: I can read (very literally) the formalized, official information regarding legal issues. On top of that, I can study (read, mostly) quantitatively formalized information on measurable attributes of the society, such as GDP per capita, supply of money, or emissions of CO2. Finally, I can have that semi-formalized information from what we call “media”, whatever prefix they come with: mainstream media, social media, rebel media, the-only-true-media etc.

As I look back upon my own life and the changes which I have observed on those four levels of social awareness, the fourth one, namely the media, has been, and still is, the biggest game changer. I remember the cultural earthquake in 1990 and later, when, after decades of state-controlled media in communist Poland, we suddenly had free press and complete freedom of publishing. Man! It was like one of those moments when you step out of a calm, dark alleyway right into the middle of heavy traffic in the street. Information just wheezed past.

There is something about media, both those called ‘mainstream’ and the modern platforms like Twitter or You Tube: they adapt to their audience, and the pace of that adaptation is accelerating. With Twitter, it is obvious: when I log into my account, I can see Tweets only from the people and organizations whom I specifically subscribed to observe. With You Tube, on my starting page, I can see the subscribed channels, for one, and a ton of videos suggested by artificial intelligence on the grounds of what I watched in the past. Still, the mainstream media go down the same avenue. When I go to bbc.com, the types of news presented are very largely what the editorial team hopes will max out on clicks per hour, which, in turn, is based on the types of news that totalled the most clicks in the past. The same was true for printed newspapers, 20 years ago: the stuff that got to the headlines was the kind of stuff that made sales.

Thus, when I simulate the collective intelligence of a society with a neural network, the function allowing the network to observe its own internal cohesion seems akin to the presence of media platforms. Actually, I have already observed, many times, that adding this specific function to a multi-layer perceptron (a type of neural network) makes that perceptron less cohesive. Looks like a paradox: observing the relative cohesion between its own decisions makes a piece of AI less cohesive. Still, real life confirms that observation. Social media favour the phenomenon known as « echo chamber »: if I want, I can expose myself only to the information that minimizes my cognitive dissonance and cut myself off from anything that pumps my adrenaline up. On a large scale, this behavioural pattern produces a galaxy of relatively small groups encapsulated in highly distilled, mutually incoherent worldviews. Have you ever wondered what it would be like to use GPS navigation to find your way in the company of a hardcore flat-Earther?

When I run my perceptron over samples of data regarding the energy efficiency of national economies, including the function of feedback on the so-called fitness function is largely equivalent to simulating a society with abundant media activity. The absence of such feedback is, on the other hand, like a society without much of a media sector.

Internal competition, in a neural network, is the deep underlying principle for structuring a multi-layer perceptron into separate layers, and for manipulating the number of neurons in each layer. Let's suppose I have two neural layers in a perceptron: A and B, in this exact order. If I put three neurons in layer A, and one neuron in layer B, the one in B will be able to choose between the 3 signals sent from layer A. Seen from A's perspective, each neuron in A has to compete against the two others for the attention of the single neuron in B. Choice on one end of a synapse equals competition on the other end.
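Here is a minimal sketch of that choice-equals-competition setup, with three neurons in layer A and one chooser neuron in layer B; the data, the sizes and the argmax choice rule are my illustrative assumptions.

```python
# A sketch of internal competition between layers: three neurons in layer A produce three rival
# signals, and the single neuron in layer B picks the strongest one. Everything here (data, sizes,
# the argmax choice rule) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, size=4)                 # one input vector

W_A = rng.normal(0, 1, size=(3, 4))           # layer A: three competing neurons
signals_A = np.tanh(W_A @ x)                  # three rival signals sent towards layer B

winner = int(np.argmax(np.abs(signals_A)))    # layer B: one neuron chooses the strongest signal
w_B = rng.normal(0, 1)
output_B = np.tanh(w_B * signals_A[winner])

print("signals from layer A:", np.round(signals_A, 3))
print("neuron chosen by layer B:", winner, "-> output:", round(float(output_B), 3))
```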

When I want to introduce choice in a neural network, I need to introduce internal competition as well. If any neuron is to have a choice between processing input A and its rival, input B, there must be at least two distinct neurons – A and B – in a functionally distinct, preceding neural layer. In a collective intelligence, choice requires competition, and there seems to be no way around it. In a real brain, neurons form synaptic sequences, which means that the great majority of our neurons fire because other neurons have fired beforehand. We very largely think because we think, not because something really happens out there. Neurons in charge of the early-stage collection of sensory data compete for the attention of our brain stem, which, in turn, proposes its pre-selected information to the limbic system, and the emotional exultation of the latter incites the cortical areas to think about the whole thing. From there, further cortical activity happens just because other cortical activity has been happening so far.

I propose you a quick self-check: think about what you are thinking right now, and ask yourself, how much of what you are thinking about is really connected to what is happening around you. Are you thinking a lot about the gradient of temperature close to your skin? No, not really? Really? Are you giving a lot of conscious attention to the chemical composition of the surface you are touching right now with your fingertips? Not really a lot of conscious thinking about this one either? Now, how much conscious attention are you devoting to what [fill in the blank] said about [fill in the blank], yesterday? Quite a lot of attention, isn’t it?

The point is that some ideas die out, in us, quickly and sort of silently, whilst others are tough survivors and keep popping up to the surface of our awareness. Why? How does it happen? What if there is some kind of competition between synaptic paths? Thoughts, or components thereof, that win one stage of the competition pass to the next, where they compete again.           

Internal competition requires complexity. There needs to be something to compete for, a next step in the chain of thinking. A neural network with internal competition reflects a collective intelligence with internal hierarchies that offer rewards. Interestingly, there is research showing that greater complexity gives more optimizing accuracy to a neural network, but just as long as we are talking about really low complexity, like 3 layers of neurons instead of two. As complexity is further developed, accuracy decreases noticeably. Complexity is not the best solution for optimization: see Olawoyin and Chen (2018[9]).

Relative resistance to new information corresponds to the way that an intelligent structure deals with cognitive dissonance. In order to have any cognitive dissonance whatsoever, we need at least two pieces of information: one that we have already appropriated as our knowledge, and the new stuff, which could possibly disturb the placid self-satisfaction of the I-already-know-how-things-work. Cognitive dissonance is a potent factor of stress in human beings as individuals, and in whole societies. Galileo would have a few words to say about it. Question: how to represent in a mathematical form the stress connected to cognitive dissonance? My provisional answer is: by division. Cognitive dissonance means that I consider my acquired knowledge as more valuable than new information. If I want to decrease the importance of B in relation to A, I divide B by a factor greater than 1, whilst leaving A as it is. The denominator of new information is supposed to grow over time: I am more resistant to the really new stuff than I am to the already slightly processed information, which was new yesterday. In a more elaborate form, I can use the exponential progression (see The really textbook-textbook exponential growth).
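A minimal sketch of that growing denominator, on synthetic data: the error signal, i.e. the new and dissonant information, gets divided by a factor that increases with every round of learning. The exact progression is an assumption for illustration.

```python
# A sketch of 'relative resistance to new information': the error signal (the new, dissonant
# stuff) is divided by a denominator that grows over time, so the network trusts fresh
# information less and less. Data and the exact progression are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(50, 3))
y = rng.uniform(0, 1, size=50)
weights = rng.normal(0, 0.1, size=3)

for epoch in range(300):
    produced = np.tanh(X @ weights)
    error = y - produced
    resistance = 1.0 + 0.02 * epoch            # denominator of new information, growing with time
    weights += 0.05 * X.T @ ((error / resistance) * (1 - produced**2)) / len(X)

print("final denominator of new information:", round(resistance, 2))
print("mean absolute error:", round(float(np.abs(error).mean()), 3))
```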

I noticed an interesting property of the neural network I use for studying energy efficiency. When I introduce choice, internal competition and hierarchy between neurons, the perceptron gets sort of wild: it produces increasing error instead of decreasing error, so it basically learns how to swing more between possible states, rather than how to narrow its own trial and error down to one recurrent state. When I add a pinchful of resistance to new information, i.e. when I purposefully create stress in the presence of cognitive dissonance, the perceptron calms down a bit, and can produce a decreasing error.   

Selection of information can occur already at the level of primary perception. I developed on this one in « Thinking Poisson, or ‘WTF are the other folks doing?’ ». Let's suppose that new science comes out about how to use particular sources of energy. We can imagine two scenarios of reaction to that new science. On the one hand, the society can react in a perfectly flexible way, i.e. each new piece of scientific research gets evaluated as for its real utility for energy management, and gets smoothly included into the existing body of technologies. On the other hand, the same society (well, not quite the same, an alternative one) can sharply sort those new pieces of science into ‘useful stuff’ and ‘crap’, with little nuance in between.
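The two scenarios can be rendered as two ways of standardizing the same input; here is a minimal sketch, with made-up utility scores, of a graded (sigmoid) perception versus a sharp useful-vs-crap split.

```python
# A sketch of the two modes of perceptual selection described above: the same stream of 'new
# science' is either standardised smoothly (each piece keeps a graded utility) or split sharply
# into 'useful stuff' and 'crap'. The utility scores are an illustrative assumption.
import numpy as np

rng = np.random.default_rng(4)
raw_utility = rng.normal(0, 1, size=10)                     # raw evaluation of 10 new findings

smooth = 1.0 / (1.0 + np.exp(-raw_utility))                 # flexible society: graded inclusion
binary = (raw_utility > 0).astype(float)                    # rigid society: useful (1) or crap (0)

print("graded perception:", np.round(smooth, 2))
print("binary perception:", binary)
```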

What do we know about collective learning and collective intelligence? Three essential traits come to my mind. Firstly, we make social structures, i.e. recurrent combinations of social relations, and those structures tend to be quite stable. We like having stable social structures. We almost instinctively create rituals, rules of conduct, enforceable contracts etc., thus we make stuff that is supposed to make the existing stuff last. An unstable social structure is prone to wars, coups etc. Our collective intelligence values stability. Still, stability is not the same as perfect conservatism: our societies have imperfect recall. This is the second important trait. Over (long periods of) time we collectively shake off, and replace old rules of social games with new rules, and we do it without disturbing the fundamental social structure. In other words: stable as they are, our social structures have mechanisms of adaptation to new conditions, and yet those mechanisms require to forget something about our past. OK, not just forget something: we collectively forget a shitload of something. Thirdly, there had been many local human civilisations, and each of them had eventually collapsed, i.e. their fundamental social structures had disintegrated. The civilisations we have made so far had a limited capacity to learn. Sooner or later, they would bump against a challenge which they were unable to adapt to. The mechanism of collective forgetting and shaking off, in every known historically documented case, had a limited efficiency.

I intuitively guess that simulating collective intelligence with artificial intelligence is likely to be the most fruitful when we simulate various capacities to learn. I think we can model something like a perfectly adaptable collective intelligence, i.e. the one which has no cognitive dissonance and processes information uniformly over time, whilst having a broad range of choice and internal competition. Such a neural network behaves in the opposite way to what we tend to associate with AI: instead of optimizing and narrowing down the margin of error, it creates new alternative states, possibly in a broadening range. This is a collective intelligence with lots of capacity to learn, but little capacity to steady itself as a social structure. From there, I can muzzle the collective intelligence with various types of stabilizing devices, making it progressively more and more structure-making, and less flexible. Down that avenue, the solver-type of artificial intelligence lies, thus a neural network that just solves a problem, with one, temporarily optimal solution.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you tell me the two things that Patreon suggests I should ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] https://data.worldbank.org/indicator/EG.ELC.ACCS.ZS last access May 17th, 2019

[2] https://data.worldbank.org/indicator/EG.ELC.LOSS.ZS?end=2016&start=1990&type=points&view=chart last access May 17th, 2019

[3] https://data.worldbank.org/indicator/EG.ELC.NUCL.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[4] https://data.worldbank.org/indicator/EG.ELC.HYRO.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[5] https://data.worldbank.org/indicator/EG.ELC.RNWX.ZS?type=points last access May 17th, 2019

[6] https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS?type=points last access May 17th, 2019

[7] https://data.worldbank.org/indicator/EG.ELC.FOSL.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[8] https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE?type=points last access May 17th, 2019

[9] Olawoyin, A., & Chen, Y. (2018). Predicting the Future with Artificial Neural Network. Procedia Computer Science, 140, 383-392.

Existence intelligente et pas tout à fait rationnelle

My editorial on You Tube

I continue with the subject of artificial intelligence. I am developing on the content of my latest update in English: « Thinking Poisson, or ‘WTF are the other folks doing?’ ». I want to build a coherent line of reasoning about the rationale and the method of using a neural network as a tool of prediction in social sciences. I feel that, in order to do so, I need to take a step back and articulate clearly the sources of my fascination with neural networks. I remember the first time I used, still very clumsily, a very simple neural network algorithm (see « Ce petit train-train des petits signaux locaux d’inquiétude »). What fascinated me, back then, was the possibility of watching, from the outside, a thing – a logical thing – learn. It was as if I were watching someone finding their way blindfolded, by touch, only that someone was a sequence of 6 equations.

Two years ago, I presented, at a conference, some empirical evidence that the essential trait of human civilisation is to maximise the absorption of energy from the environment. In fact, the technological changes of our civilisation since 1960 have had the effect of increasing said absorption of energy. This is one of the intellectual paths that fascinate me. When I think about the different manifestations of biological life, every species maximises its absorption of energy. We humans are no exception to that rule. In another article, I presented a creative application of the good old production function – such as you can find it in the article by Charles Cobb and Paul Douglas – to the phenomenon of human societies adapting to their local environments, given the amount of energy and food available. The general conclusion I draw from the research presented in those two articles is that the existence of human societies is a story of intelligent, if imperfectly rational, learning at several levels. Not really original, you will say. Yes, not very original, but it gives inspiration and it excites my curiosity.

Stories unfold. I am curious where this intelligent and not quite rational existence may well lead us. It is logical. I am a researcher in social sciences and I try to predict, over and over again, as I receive new information, what shape society will take in the future. How will we adapt to climate change? How can we stop or reverse those changes? How will we behave, in Europe, if a continental-scale food shortage occurs? What will tomorrow's law be? Will it punish any verbal offence to anyone's sensitivity? Will the law regulate access to drinking water? How will we vote in parliamentary elections, 100 years from now? Will there be parliamentary elections at all?

So many questions, and they provoke two types of attitude. ‘Who knows? There are so many variables at play that it is impossible to say anything even moderately reasonable’ is the first one. ‘Who knows? Let's try to formulate hypotheses, for a start. Hypotheses give a point of departure. Then we can evaluate the new information we will gain in the future against those hypotheses, and understand a bit more of what is going on.’ That is the second possible approach, and I subscribe to it. I am a researcher, science is my passion, I am curious, and I prefer knowing to ignoring.

For practically a year now, I have been striving to fine-tune a concept of financial business which I baptised EneFin. In general, it is about stimulating the development of new sources of energy – mostly small, local installations based on renewables – through a financial mechanism that combines a cooperative structure with typically capitalistic solutions, a bit like in crowdfunding. There is something strange about this idea, or rather about my attempts to develop it. At first sight, it seems attractive in its simplicity. Yet when I set about describing and developing the idea, either as a business plan or as a scientific article, I bump against… Well, I don't know exactly against what. There is something like a blockage in my brain. As I try to understand the nature of that blockage, it seems to be something like residual complexity. It is as if a part of my intellect kept telling me: ‘This thing is more complex than you think. You have not uncovered all the cards of this game. It is too early to present it as a ready-made idea. You need to keep searching and discovering before you present.’

EneFin is an essentially financial concept. Finance tends to work in feedback loops: phenomena which, just a moment earlier, were the cause and the driving force of something become the effect of that very something. This is one of the reasons why classical stochastic methods, such as linear regression, give very unsatisfactory results when it comes to predicting financial markets. The stochastic method aims at finding a mathematical function that gives a mathematically coherent representation of the empirical data – a function – with as small a standard error as possible. Prediction strictly spoken consists in projecting that function into a possible, uncertain future. The quality of prediction is judged, in fact, after the fact, i.e. once the future of yore has become the past, even the immediate past, of the present. There is an assumption deeply hidden in this method: the assumption that we know everything there is to know.

The stochastic method requires stating openly that the sample of empirical data I use to trace a function is a representative sample. Following the logic of de Moivre and Laplace, my sample has stochastic value only when its arithmetic mean is identical to the mean observable in reality at large, or close enough to that real mean for the difference to be insignificant. Saying that my observation of reality is representative of that reality creates a special cognitive perspective, where I claim to know everything it is necessary to know about the world around me.

If you are working on a project and someone tells you ‘Go in direction A, I know perfectly well that I am right’, you will probably answer: ‘With all due respect, no, you cannot know for sure that you are right. Reality changes, and it surprises.’ There lies the Achilles heel of the stochastic method. Although officially different from good old determinism, it keeps some of its characteristics. With all its undeniable advantages, it is very exposed to the error of incomplete observation.

There is that joke about economics: that it is the art of formulating forecasts which do not hold. Cruel and exaggerated, that joke, yet frequently true. That is probably why a slightly different niche has developed in social sciences, one that draws on physical sciences and uses theoretical models such as Brownian motion or Itô processes. In that approach, the function fitted to empirical data explicitly includes a component of random change.

A neural network goes in a still slightly different direction. Instead of assembling all the empirical observations and deriving one common function from them, a neural network experiments with small subsets of the complete sample. After each experiment, the network tests its capacity to obtain a result equal to a reference value. The result of that test is then used as additional information in subsequent experiments. Artificial intelligence enjoys the success it enjoys because we know that certain sequences of mathematical functions have the capacity to optimise real functions, for example the working of a floor-cleaning robot.
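Here is a minimal sketch of that loop, on synthetic data: the network draws small subsets of the sample, tests its output against a reference value, and carries the resulting error into the next experiment. Names and numbers are illustrative assumptions.

```python
# A sketch of the learning loop described above: instead of fitting one function to the whole
# sample at once, the network draws small subsets, tests its output against a reference value,
# and carries the resulting error into the next experiment. Data are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(500, 4))
reference = X @ np.array([0.4, -0.2, 0.7, 0.1])             # the value the network tries to match

weights = rng.normal(0, 0.1, size=4)
for experiment in range(2000):
    idx = rng.choice(len(X), size=20, replace=False)        # a small subset of the full sample
    produced = X[idx] @ weights
    error = reference[idx] - produced                       # test against the reference value
    weights += 0.01 * X[idx].T @ error / len(idx)           # the error feeds the next experiment

print("learned weights:", np.round(weights, 2))
```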

If a sequence of actions possesses the capacity to optimise itself, it behaves like the intelligence of a living organism: it learns. That is the method I need in order to work my idea of a financial solution for renewable energies through and through. Finance contains multiple feedback loops between the variables at play, which are a big problem for stochastic models. For a neural network, feedback loops are precisely what the network's artificial intelligence is made for.

Incidentally, I have found an interesting article on the methodology of using neural networks as prediction tools, alternative or complementary to stochastic models. Olawoyin and Chen (2018[1]) discuss the predictive value of several possible architectures of a multi-layer perceptron. The predictive value is assessed by applying the perceptrons, on the one hand, and an ARIMA model, on the other hand, to the prediction of the same variables in the same sample of empirical data. The multi-layer perceptron does better than the stochastic model, whatever the exact conditions of the experiment. Olawoyin and Chen find two interesting things about the architecture of the neural network. Firstly, a perceptron based on the hyperbolic tangent as its neural activation function is generally more accurate in its predictions than one based on the sigmoid function. Secondly, multiplying the layers of neurons in the perceptron does not translate directly into predictive value. In Olawoyin and Chen's work, the 3-layer network seems to do generally better than the 4-layer one.

It is perhaps a good idea that I explain this business of layers. In an artificial neural network, a neuron is a mathematical function with a precise task to perform. Attributing random weighting coefficients to the input variables is a function distinct from computing the output variable through a neural activation function. I thus have two distinct neurons: one that attributes the random coefficients, and another that computes the activation function. Logically, the latter needs the values created by the former, hence the attribution of random coefficients is the neural layer preceding the computation of the activation function, which is therefore located in the following layer. Generally speaking, if equation A requires the result of equation B, equation B will be in the preceding layer and equation A will find its expression in the following layer. It is like in a brain: to contemplate the beauty of a Cézanne painting I need to see it, so the neurons directly engaged in vision are in an earlier layer, and the neurons responsible for the gasps of admiration make up the following layer.
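A minimal sketch of those two layers, in Python: one function attaches random weighting coefficients to the inputs, the next one computes the activation, shown here with both the hyperbolic tangent and the sigmoid mentioned above. The input values are an arbitrary assumption.

```python
# A sketch of the two 'layers' described above: one function assigns (random) weights to the
# input variables, and the next function computes the neural activation from that result.
import numpy as np

rng = np.random.default_rng(6)

def weighting_layer(inputs):
    """Preceding layer: attach random weighting coefficients to the input variables."""
    coefficients = rng.uniform(0, 1, size=inputs.shape)
    return np.sum(inputs * coefficients)

def activation_layer(signal, kind="tanh"):
    """Following layer: squeeze the weighted signal through an activation function."""
    return np.tanh(signal) if kind == "tanh" else 1.0 / (1.0 + np.exp(-signal))

x = np.array([0.2, 0.8, 0.5])                  # illustrative input variables
weighted = weighting_layer(x)
print("tanh activation   :", round(float(activation_layer(weighted, "tanh")), 3))
print("sigmoid activation:", round(float(activation_layer(weighted, "sigmoid")), 3))
```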

Why speak of layers rather than of individual neurons? It is a discovery that even I, a neophyte barely initiated into the fundamentals of neural networks, already understand: when I place multiple neurons in the same functional layer of the network, I can put them in competition, that is to say the neurons of the following layer can choose between the different results brought by the distinct neurons of the preceding layer. I started testing this thing in « Surpopulation sauvage ou compétition aux États-Unis ». Incidentally, I had then discovered roughly the same thing that Olawoyin and Chen (2018) present in their article: more complexity in the architecture of a neural network creates more possibilities rather than more predictive accuracy. When it comes to prediction strictly spoken, the simpler the network, the more accuracy it gives. On the other hand, when it comes to formulating precise alternative hypotheses, more complexity broadens the repertoire of the perceptron's possible behaviours and gives more scope in describing alternative states of the same situation.

I keep delivering good science to you, almost brand new, just slightly dented in the design process. Let me remind you that you can download the business plan of the BeFund project (also available in English). You can also download my book entitled “Capitalism and Political Power”. I want to use crowdfunding to give myself a financial base for this effort. You can support my research financially, according to your best judgment, through my PayPal account. You can also register as my patron on my Patreon account. If you do so, I will be grateful if you let me know two important things: what kind of reward do you expect in exchange for your patronage, and what milestones would you like to see in my work? You can contact me through this blog's mailbox: goodscience@discoversocialsciences.com .


[1] Olawoyin, A., & Chen, Y. (2018). Predicting the Future with Artificial Neural Network. Procedia Computer Science, 140, 383-392.

Thinking Poisson, or ‘WTF are the other folks doing?’

My editorial on You Tube

I think I have just put a nice label on all those ideas I have been rummaging in for the last 2 years. The last 4 months, when I have been progressively initiating myself into artificial intelligence, have helped me to put it all in a nice frame. Here is the idea for a book, or rather for THE book, which I have been drafting for some time. « Our artificial intelligence »: this is the general title. The first big chapter, which might very well turn into the first book out of a whole series, will be devoted to energy and technological change. After that, I want to have a go at two other big topics: food and agriculture, then laws and institutions.

I explain. What does « Our artificial intelligence » mean? As I have been working with an initially simple algorithm of a neural network, and progressively developing it, I have understood a few things about the link between what we call, for want of a better word, artificial intelligence, and the way my own brain works. No, not my brain. It would be an overstatement to say that I understand my own brain fully. My mind: this is the right expression. What I call « mind » is an idealized, i.e. linguistic, description of what happens in my nervous system. As I have been working with a neural network, I have discovered that the artificial intelligence I make, and use, is a mathematical expression of my mind. I project my way of thinking into a set of mathematical expressions, made into an algorithmic sequence. When I run the sequence, I have the impression of dealing with something clever, yet slightly alien: an artificial intelligence. Still, when I stop staring at the thing, and start thinking about it scientifically (you know: initial observation, assumptions, hypotheses, empirical check, new assumptions and new hypotheses etc.), I become aware that the alien thing in front of me is just a projection of my own way of thinking.

This is important about artificial intelligence: this is our own, human intelligence, just seen from outside and projected into electronics. This particular point is an important piece of theory I want to develop in my book. I want to compile research in neurophysiology, especially in the neurophysiology of meaning, language, and social interactions, in order to give scientific clothes to that idea. When we sometimes ask ourselves whether artificial intelligence can eliminate humans, it boils down to asking: ‘Can human intelligence eliminate humans?’. Well, where I come from, i.e. Central Europe, the answer is certainly ‘yes, it can’. As a matter of fact, when I raise my head and look around, the same answer is true for any part of the world. Human intelligence can eliminate humans, and it can do so because it is human, not because it is ‘artificial’.

When I think about the meaning of the word ‘artificial’, it comes from the Latin ‘artificium’, which, in turn, designates something made with skill and demonstrable craft. Artificium means seasoned skills made into something durable so as to express those skills. Artificial intelligence is a crafty piece of work made with one of the big human inventions: mathematics. Artificial intelligence is mathematics at work. Really at work, i.e. not just as another idealization of reality, but as an actual tool. When I study the working of algorithms in neural networks, I have a vision of an architect in Ancient Greece, where the first mathematics we know seem to be coming from. I have a wall and a roof, and I want them both to hold in balance, so what is the proportion between their respective lengths? I need to learn it by trial and error, as I haven't any architectural knowledge yet. Although devoid of science, I have common sense, and I make small models of the building I want (have?) to erect, and I test various proportions. Some of those maquettes are more successful than others. I observe, I make my synthesis about the proportions which give the least error, and so I come up with something like the Pythagorean z² = x² + y², something like π ≈ 3.14, or something like the discovery that, for a given angle, the tangent proportion y/x always makes the same number, whatever the empirical lengths of y and x.
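For illustration, here is a small sketch of that trial-and-error generalisation, assuming a fixed angle and imperfect measurements: many maquettes of different sizes keep yielding the same y/x ratio, and the average of those trials is the generalisation.

```python
# A small sketch of the trial-and-error generalisation described above: for a fixed angle, several
# 'maquettes' of different sizes are measured, and the ratio y/x keeps coming out (almost) the same.
# The angle, the sizes and the measurement error are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
angle = np.deg2rad(30)                              # the fixed angle of the wall-and-roof problem

ratios = []
for trial in range(5):
    x = rng.uniform(1, 10)                          # empirical length of the horizontal side
    y = x * np.tan(angle) + rng.normal(0, 0.02)     # vertical side, measured with a craftsman's error
    ratios.append(y / x)

print("measured y/x ratios:", np.round(ratios, 3))  # roughly the same number every time
print("generalisation from the trials:", round(float(np.mean(ratios)), 3))
```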

This is exactly what artificial intelligence does. It makes small models of itself, tests the error resulting from comparison between those models and something real, and generalizes the observation of those errors. Really: this is what a face recognition piece of software does at an airport, or what Google Ads does. This is human intelligence, just unloaded into a mathematical vessel. This is the first discovery that I have made about AI. Artificial intelligence is actually our own intelligence. Studying the way AI behaves allows seeing, like under a microscope, the workings of human intelligence.

The second discovery is that when I put a neural network to work with empirical data of social sciences, it produces strange, intriguing patterns, something like neighbourhoods of the actual reality. In my root field of research – namely economics – there is a basic concept that we, economists, use a lot and still wonder what it actually means: equilibrium. It is an old observation that networks of exchange in human societies tend to find balance in some precise proportions, for example proportions between demand, supply, price and quantity, or those between labour and capital.

Half of economic sciences is about explaining the equilibriums we can empirically observe. The other half employs itself at discarding what that first half comes up with. Economic equilibriums are something we know exist, and we constantly try to understand their mechanics, but those states of society remain obscure to a large extent. What we know is that networks of exchange are like machines: some designs just work, some others just don't. One of the most important arguments in economic sciences is whether a given society can find many alternative equilibriums, i.e. whether it can use its resources optimally at many alternative proportions between economic variables, or, conversely, whether there is just one point of balance in a given place and time. From there on, it is a rabbit hole. What does it mean to ‘use our resources optimally’? Is it when we have the lowest unemployment, or when we have just some healthy amount of unemployment? Theories are welcome.

When trying to make predictions about the future, using the apparatus of what can now be called classical statistics, social sciences always face the same dilemma: rigor vs cognitive depth. The most interesting correlations are usually somehow wobbly, and mathematical functions we derive from regression always leave a lot of residual errors.    

This is when AI can step in. Neural networks can be used as tools for optimization in digital systems. Still, they have another useful property: observing a neural network at work allows having an insight into how intelligent structures optimize. If I want to understand how economic equilibriums take shape, I can observe a piece of AI producing many alternative combinations of the relevant variables. Here comes my third fundamental discovery about neural networks: with a few, otherwise quite simple assumptions built into the algorithm, AI can produce very different mechanisms of learning, and, consequently, a broad range of those weird, yet intellectually appealing, alternative states of reality. Here is an example: when I make a neural network observe its own numerical properties, such as its own kernel or its own fitness function, its way of learning changes dramatically. Sounds familiar? When you make a human being perform tasks, and you allow them to see the MRI of their own brain while performing those tasks, the actual performance changes.

When I want to talk about applying artificial intelligence, it is a good thing to return to the sources of my own experience with AI, and explain how it works. Some sequences of mathematical equations, when run recurrently many times, behave like intelligent entities: they experiment, they make errors, and after many repeated attempts they come up with a logical structure that minimizes the error. I am looking for a good, simple example from real life; a situation which I experienced personally, and which forced me to learn something new. Recently, I went to Marrakech, Morocco, and I had the kind of experience that most European first-timers have there: the Jemaa El Fna market place, its surrounding souks, and its merchants. The experience consists in finding your way out of the maze-like structure of the alleys adjacent to the Jemaa El Fna. You walk down an alley, you turn into another one, then into still another one, and what you notice only after quite a few such turns is that the whole architectural structure doesn't follow AT ALL the European concept of urban geometry.

Thus, you face the length of an alley. You notice five lateral openings and you see a range of lateral passages. In a European town, most of those lateral passages would lead somewhere. A dead end is an exception, and passages between buildings are passages in the strict sense of the term: from one open space to another open space. At Jemaa El Fna, it's different: most of the lateral ways lead into deep, dead-end niches, with more shops and stalls inside, yet some others open up into other alleys, possibly leading to the main square, or at least to a main street.

You pin down a goal: get back to the main square in less than… what? One full day? Just kidding. Let's peg that goal down at 15 minutes. For want of a good-quality drone, equipped with thermal vision, flying over the whole structure of the souk and guiding you, you need to experiment. You need to test various routes out of the maze and to trace those which meet the x ≤ 15 minutes target. If all the possible routes allowed you to get out to the main square in exactly 15 minutes, experimenting would be useless. There is a point in experimenting only if some of the possible routes yield a suboptimal outcome. You are facing a paradox: in order not to make (too many) errors in your future strolls across Jemaa El Fna, you need to make some errors when you learn how to stroll through.
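As a toy illustration of that learning-by-erring, here is a minimal sketch in which routes through the souk are just random sequences of alleys with made-up walking times, and only the routes meeting the 15-minute target are kept. The maze and the times are pure assumptions.

```python
# A sketch of the trial-and-error route learning described above: routes are modelled as random
# sequences of alleys with assumed walking times, and we keep only those that meet the
# 15-minute target. The maze and the times are illustrative assumptions.
import random

random.seed(8)
alleys = {f"alley_{i}": random.uniform(1, 6) for i in range(12)}    # minutes to walk each alley

def try_route():
    """One stroll: pick a handful of alleys at random and add up the walking time."""
    route = random.sample(list(alleys), k=random.randint(3, 6))
    return route, sum(alleys[a] for a in route)

good_routes = []
for attempt in range(200):                       # learning = making errors on purpose
    route, minutes = try_route()
    if minutes <= 15:
        good_routes.append((round(minutes, 1), route))

best = min(good_routes)
print(f"{len(good_routes)} routes out of 200 attempts met the 15-minute target")
print("fastest found:", best)
```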

Now, imagine a fancy app in your smartphone, simulating the possible errors you can make when trying to find your way through the souk. You could watch an imaginary you, on the screen, wandering through the maze of alleys and dead-ends, learning by trial and error to drive the time of passage down to no more than 15 minutes. That would be interesting, wouldn't it? You could see your possible errors from outside, and you could study the way you can possibly learn from them. Of course, you could always say: ‘it is not the real me, it is just a digital representation of what I could possibly do’. True. Still, I can guarantee you: whatever you say, however strong the grip you would try to keep on the actual, here-and-now you, you just couldn't help being fascinated.

Is there anything more, beyond fascination, in observing ourselves making many possible future mistakes? Let’s think for a moment. I can see, somehow from outside, how a copy of me deals with the things of life. Question: how does the fact of seeing a copy of me trying to find a way through the souk differ from just watching a digital map of said souk, with GPS, such as Google Maps? I tried the latter, and I have two observations. Firstly, in some structures, such as that of maze-like alleys adjacent to Jemaa El Fna, seeing my own position on Google Maps is of very little help. I cannot put my finger on the exact reason, but my impression is that when the environment becomes just too bizarre for my cognitive capacities, having a bird’s eye view of it is virtually no good. Secondly, when I use Google Maps with GPS, I learn very little about my route. I just follow directions on the screen, and ultimately, I get out into the main square, but I know that I couldn’t reproduce that route without the device. Apparently, there is no way around learning stuff by myself: if I really want to learn how to move through the souk, I need to mess around with different possible routes. A device that allows me to see how exactly I can mess around looks like having some potential.

Question: how do I know that what I see, in that imaginary app, is a functional copy of me, and how can I assess the accuracy of that copy? This is, very largely, the rabbit hole I have been diving into for the last 5 months or so. The first path to follow is to look at the variables used. Artificial intelligence works with numerical data, i.e. with local instances of abstract variables. The similarity between the real me and the me reproduced as artificial intelligence is to be found in the variables used. In real life, variables are the kinds of things which: a) are correlated with my actions, both as outcomes and as determinants, and b) I care about, and yet I am not bound to be conscious of caring about.

Here comes another discovery I made on my journey through the realm of artificial intelligence: even if, in the simplest possible case, I just make the equations of my neural network so that they represent what I think is the way I think, and I drop some completely random values of the relevant variables into the first round of experimentation, the neural network produces something disquietingly logical and coherent. In other words, if I am even moderately honest in describing, in the form of equations, my way of apprehending reality, the AI I thus create really processes information in the way I would.

Another way of assessing the similarity between a piece of AI and myself is to compare the empirical data we use: I can make a neural network think more or less like me if I feed it with an accurate description of my so-far experience. In this respect, I discovered something that looks like a keystone in my intellectual structure: as I feed my neural network with more and more empirical data, the scope of the possible ways of learning something meaningful narrows down. When I minimise the amount of empirical data fed into the network, the latter can produce interesting, meaningful results via many alternative sequences of equations. As the volume of real-life information swells, some sequences of equations just naturally drop out of the game: they drive the neural network into a state of structural error, when it stops performing calculations.

At this point, I can see some similarity between AI and quantum physics. Quantum mechanics has grown as a methodology because it proved to be exceptionally accurate in predicting the outcomes of experiments in physics. That accuracy was based on the capacity to formulate very precise hypotheses regarding empirical reality, and the capacity to increase the precision of those hypotheses through the addition of empirical data from past experiments.

Those fundamental observations I made about the workings of artificial intelligence have progressively brought me to use AI in social sciences. An analytical tool has become a topic of research for me. Happens all the time in science, mind you. Geometry, way back in the day, was a thoroughly practical set of tools, which served to make good boats, ships and buildings. With time, geometry has become a branch of science in its own right. In my case, it is artificial intelligence. It is a tool, essentially, invented back in the 1960s and 1970s, and developed over the last 20 years, and it serves practical purposes: facial identification, financial investment etc. Still, as I have been working with a very simple neural network for the last 4 months, and as I have been developing the logical structure of that network, I am discovering a completely new opening in my research in social sciences.

I am mildly obsessed with the topic of collective human intelligence. I have that deeply rooted intuition that collective human behaviour is always functional regarding some purpose. I perceive social structures such as financial markets or political institutions as something akin to endocrine systems in a body: a complex set of signals with a random component in their distribution, and yet a very coherent outcome. I follow up on that intuition by assuming that we humans are, most fundamentally, collectively intelligent regarding our food and energy base. We shape our social structures according to the quantity and quality of available food and non-edible energy. For quite a while, I was struggling with the methodological issue of precise hypothesis-making. What states of human society can be posited as coherent hypotheses, possible to check or, failing that, to speculate about in an informed way?

The neural network I am experimenting with does precisely this: it produces strange, puzzling, complex states, defined by the quantitative variables I use. As I work with that network, I have come to redefine the concept of artificial intelligence. The movie-based approach to AI is that it is fundamentally non-human. As I think about it step by step, AI is human: it has been developed on the grounds of human logic. It is human meaning, and therefore an expression of human neural wiring. It is just selective in its scope. Natural human intelligence has no other way of comprehending but comprehending IT ALL, i.e. the whole of perceived existence. Artificial intelligence is limited in scope: it works just with the data we assign it to work with. AI can really afford not to give a f**k about something otherwise important. AI is focused in the strict sense of the term.

During that recent stay in Marrakech, Morocco, I was observing people around me and their ways of doing things. As is my habit, I am patterning human behaviour. I am connecting the dots about the ways of using energy (for the moment, I haven’t seen any making of energy yet) and food. I am patterning the urban structure around me and the way people live in it.

Superbly kept gardens and buildings marked by a sense of instability. Human generosity combined with somewhat erratic behaviour in the same humans. Of course, women are fully dressed, from head to toe, but surprisingly enough, men too. With close to 30 degrees Celsius outside, most local dudes are dressed like a Polish guy would dress at 10 degrees Celsius. They dress for the heat as I would dress for noticeable cold. Exquisitely fresh and firm fruit and vegetables are a surprise. After having visited Croatia, on the southern coast of Europe, I would rather expect those tomatoes to be soft and somewhat past due. Still, they are excellent. Loads of sugar in very nearly everything. Meat is scarce and tough. All that has already been described and explained by many a researcher, wannabe researchers included. I think about those things around me as local instances of a complex logical structure: a collective intelligence able to experiment with itself. I wonder what other, hypothetical forms this collective intelligence could take, close to the actually observable reality, as well as at some distance from it.

The idea I can see burgeoning in my mind is that I can understand better the actual reality around me if I use some analytical tool to represent slight hypothetical variations in said reality. Human behaviour first. What exactly makes me perceive Moroccans as erratic in their behaviour, and how can I represent it in the form of artificial intelligence? Subjectively perceived erraticism is a perceived dissonance between sequences. I expect a certain sequence to happen in other people’s behaviour. The sequence that really happens is different, and possibly more differentiated than what I expect to happen. When I perceive the behaviour of Moroccans as erratic, does it connect functionally with their ways of making and using food and energy?  

A behavioural sequence is marked by a certain order of actions, and by a timing. In a given situation, humans can pick their behaviour from a total basket of Z = {a1, a2, …, az} possible actions. These, in turn, can combine into zPk = z!/(z – k)! = (1*2*…*z) / [1*2*…*(z – k)] possible permutations of k component actions. Each such permutation happens with a certain frequency. The way a human society works can be described as a set of frequencies attached to those zPk permutations. Well, that’s exactly what a neural network such as mine can do. It operates with values standardized between 0 and 1, and these can be very easily interpreted as frequencies of happening. I have a variable named ‘energy consumption per capita’. When I use it in the neural network, I routinely standardize each empirical value over the maximum of this variable in the entire empirical dataset. Still, standardization can carry a bit more of a mathematical twist and can be read as a probability under the curve of a statistical distribution.
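Just to illustrate the arithmetic, a minimal sketch in Python; the basket of actions and the observed sequences are invented, purely for the sake of the example.

```python
from math import factorial
from collections import Counter

def n_permutations(z, k):
    """Number of ordered sequences of k distinct actions picked from z possible ones: zPk = z! / (z - k)!"""
    return factorial(z) // factorial(z - k)

print(n_permutations(5, 3))  # a basket of z = 5 actions yields 60 possible 3-action sequences

# invented observations of behavioural sequences; their shares play the role of frequencies
observed = [
    ("greet", "bargain", "buy"),
    ("greet", "bargain", "leave"),
    ("greet", "bargain", "buy"),
    ("greet", "leave", "buy"),
]
frequencies = {seq: count / len(observed) for seq, count in Counter(observed).items()}
print(frequencies)  # each frequency falls between 0 and 1, just like a standardized neural activation
```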

When I feel like giving standardization that extra mathematical twist, I can make my neural network stroll down different avenues of intelligence. I can assume that all kinds of things happen, that those things are densely packed one next to the other, and that some of them are more expected than others, and thus I can standardize my variables under the curve of the normal distribution. Alternatively, I can see each empirical instance of each variable in my database as a rare event in an interval of time, and then I standardize under the curve of the Poisson distribution. A quick check with the database I am using right now brings an important observation: the same empirical data standardized with a Poisson distribution becomes much more disparate than the same data standardized with the normal distribution. When I use Poisson, I lead my neural network to divide the empirical data sharply into important stuff on the one hand, and all the rest, not even worth bothering about, on the other.

Let me give an example. Here comes energy consumption per capita in Ecuador (1992) = 629,221 kg of oil equivalent (koe), the Slovak Republic (2000) = 3 292,609 koe, and Portugal (2003) = 2 400,766 koe. These are three different states of human society, characterized by a certain level of energy consumption per person per year. They are different. I can choose between three different ways of making sense of their disparity. I can see them quite simply as points on a scale of magnitude, i.e. I can standardize them as fractions of the greatest energy consumption in the whole sample. When I do so, they become: Ecuador (1992) = 0,066733839, the Slovak Republic (2000) = 0,349207223, and Portugal (2003) = 0,254620211.

In an alternative worldview, I can perceive those three different situations as neighbourhoods of an expected average energy consumption, in the presence of an average, standard deviation from that expected value. In other words, I assume that it is normal that countries differ in their energy consumption per capita, as well as it is normal that years of observation differ in that respect. I am thinking normal distribution, and then my three situations come as: Ecuador (1992) = 0,118803134, Slovak Republic (2000) = 0,556341893, and Portugal (2003) = 0,381628627.

I can adopt an even more convoluted approach. I can assume that energy consumption in each given country is the outcome of a unique, hardly reproducible process of local adjustment. Each country, with its energy consumption per capita, is a rare event. Seen from this angle, my three empirical states of energy consumed per capita could occur with the probability of the Poisson distribution, estimated with the whole sample of data. With this specific take on the thing, my three empirical values become: Ecuador (1992) = 0, Slovak Republic (2000) = 0,999999851, and Portugal (2003) = 9,4384E-31.
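A minimal sketch of those three readings in Python. I am assuming here that the normal variant is the cumulative probability under a normal curve fitted to the sample, and that the Poisson variant is the cumulative Poisson probability with λ equal to the sample mean; those parametrizations are my assumptions, and with only the three values quoted above instead of the full 1228-record sample, the output will not reproduce the exact figures in the text.

```python
import numpy as np
from scipy.stats import norm, poisson

# Ecuador 1992, Slovak Republic 2000, Portugal 2003 (kg of oil equivalent per capita)
energy = np.array([629.221, 3292.609, 2400.766])

# 1) simple "denomination": each value as a fraction of the sample maximum
max_based = energy / energy.max()

# 2) cumulative probability under a normal curve fitted to the sample
mu, sigma = energy.mean(), energy.std(ddof=1)
normal_based = norm.cdf(energy, loc=mu, scale=sigma)

# 3) cumulative Poisson probability, lambda set to the sample mean (an assumption of mine)
poisson_based = poisson.cdf(np.round(energy), mu=mu)

for label, values in [("max-based", max_based), ("normal", normal_based), ("Poisson", poisson_based)]:
    print(label, np.round(values, 6))
```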

I come back to Morocco. I perceive some behaviours in Moroccans as erratic. I think I tend to think in terms of the Poisson distribution. I expect some very tightly defined, rare event of behaviour, and when I see none around, I discard everything else as completely not fitting the bill. As I think about it, I guess most of our human intelligence is Poisson-based. We think ‘good vs bad’, ‘food vs not food’, ‘friend vs foe’ etc.

I am consistently delivering good, almost new science to my readers, I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful if you suggest two things that Patreon asks me to ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Two alternative intelligences

My editorial on You Tube

Here I am again with energy. My own energy, of course, but also energy as a topic. I am indulging my three scientific obsessions. One: financial solutions for encouraging the transition towards renewable energies. Two: the link between financial markets and technological change. Three: the application of artificial intelligence to the study of collective intelligence.

In my latest update in English – « We, the average national economy. Research and case study in finance » – I started sketching the direction of my research. I more or less picked up the analytical path already signalled in « Surpopulation sauvage ou compétition aux États-Unis » and broadened it to a larger sample of 56 countries. Apparently, the growth of energy efficiency in the global economy, from $8,08 per kilogram of oil equivalent in 1990 up to $10,76 in 2014, was accompanied by an almost equivalent accumulation of capital, in fixed assets as well as in monetary balances. The interesting thing is that these two asset-side components of the global economy’s balance sheet seem to keep a more or less constant proportion to each other. In other words, a complex system which, in the database I use for this research, consists of 56 countries, keeps a more or less constant liquidity while accumulating capital and increasing its energy efficiency.

It looks very much like collective intelligence: a system which has no chance of having a central brain and which nevertheless behaves like an organism. There is other research which, in a way, corroborates this approach. There is that model called MUSIASEM (Andreoni 2017[1]; Velasco-Fernández et al. 2018[2]), which provides convincing empirical evidence that, as regards energy and the efficiency of its use, the global economy behaves like an adaptive metabolism, whose adaptation manifests itself, among other things, in a geographical rearrangement of the means of production.

So I return, with the perseverance of a drunkard trying to open the wrong front door with the right key, to the subject of artificial intelligence. I have just experimented a bit with the neural network I use in this specific line of research and, once again, the thing surprised me. Here I give you a selective account of those surprises. For a detailed description of how this particular neural network works, you can refer to « Surpopulation sauvage ou compétition aux États-Unis ». Moving from the case of the United States to the general sample of many countries, I just added one more variable, which I had already used in the past (see, for example, « Deux lions de montagne, un bison mort et moi »): the food deficit per person. It is a most structural variable: it is highly idiosyncratic country by country, whilst staying very stable over time. An ideal registration plate for a country. Besides, I am following that path of discovery where I assume that food, oil and electricity converge, at some level, as different manifestations of our species’ capacity to transform the energy available in our environment.

Now, the surprises. Until now, whenever I worked with this neural network, it worked every time. I mean it produced a result in every possible case, whatever learning conditions I imposed on it. Granted, those results were sometimes absurd, but results there were. In this particular case, the neural network works only under certain conditions. It frequently jams, i.e. it returns a general error of the « NUMBER! » type, when the magnitude of the variables reaches values like 40 or –40, hence when the neural activation functions go haywire, since they are essentially built to work with values standardized between 0 and 1 (between -1 and 1 for the hyperbolic tangent). That is new, and I like new. I like understanding.

So, I try to understand. What has changed in the starting conditions, as compared to the previous applications of this same neural network? What has certainly changed is the quantity and complexity of the original empirical data, i.e. of what constitutes the primary learning material. In this particular case, I give my neural network N = 1228 cases of ‘real country – given year’. Previously, I used to give it between 20 and 25 such occurrences. I feel like laughing. At myself, I mean. It is so obvious! When I learn something, the way of doing it depends on the complexity of the input information. The richer and more complex that information, the more finesse I need to show in my learning. Learning how to change a pipe under my kitchen sink is simple. Learning plumbing in general, including the method of changing a gas valve, is a harder task, which requires a different approach.

I use a neural network to simulate the behaviour of the collective intelligence of a society. I assume that the values of the empirical variables represent as many different, temporary states of distinct processes of social change. The simulation of collective intelligence, as my neural network does it, starts with an important assumption: all the variables taken into account are divided into two categories, where one variable is considered the outcome variable and all the others are input variables. I assume that the intelligent entity is oriented on optimizing the outcome variable, and the input variables are instrumental to that effect. I imply a vital function in my intelligent entity. I know that neural networks much more advanced than mine are able to define that function by themselves, and I even have a few ideas about how to include that component in my own network. Be that as it may, a vital function is something to have in a neural network. Without it, what is the point? I mean, if there is nothing to achieve, life loses its meaning and intelligence boils down to the capacity to order another drink and to check Twitter for the millionth time.

When I consider the collective intelligence of a real society and define its vital function in the way described above, it is a gross simplification. When I approach that vital function from a purely mathematical angle, it makes more sense. The outcome variable is the one my neural network touches relatively the least: it modifies it much less than the input variables. The distinction between the outcome variable and the input variables means that one variable in the lot – the outcome one – anchors the simulation of collective intelligence in a context similar to the one known to all economists as ceteris paribus, or ‘other factors held constant’. I can therefore orient my simulation so as to show the possible states of social reality under different outcome anchors. What happens if I anchor my social system at a given level of energy efficiency? How will the hypothetical state of that society, as produced by the neural network, change with another outcome anchor? What differences in behaviour do I produce under different vital functions?

Now, a question of language. The neural network speaks numbers. It understands numerical data and it communicates numerical results. In principle, the numerical language of the basic activation functions, the sigmoid and the hyperbolic tangent, is limited to numerical values standardized between 0 and 1. In fact, the hyperbolic tangent is a bit more of a polyglot and also understands the dialect between -1 and 0. In my communication with the neural network I thus encounter two linguistic challenges: that of speaking to the thing in standardized numbers which correspond as closely as possible to reality, and that of correctly understanding the numerical results the network returns.

So I have this database, N = 1228 occurrences of ‘country <> year’, and I translate the empirical values in it into standardized values. The basic, simplest procedure consists in computing the observed maximum for each variable separately and then dividing each empirical value of that variable by said maximum. If I am not mistaken, this is called ‘denomination’. In a more elaborate approach, I can standardize under the curve of the normal distribution. That is the standardization you get in statistical software. There is a small problem with empirical values which, after standardization, are strictly equal to 0 or 1. In theory, they should be turned into something like 0,001 or 0,999. In practice, if there are not many of those ‘0’s and ‘1’s in the sample offered to my neural network as learning material, I can ignore them.

The question of language I am focusing on now is that of understanding what the neural network returns as a result. Mathematically, that result equals xf = xi + ∑e, where xf is the final value spat out by the network, xi is the initial value, and ∑e is the sum of local errors added to the initial value after n rounds of experimentation. Suppose I run n = 3000 rounds of experimentation. What exactly is my final value xf? Is it the value obtained in round no. 3000? That is what I often assume, but there are ‘buts’ against this approach. Firstly, if the local errors ‘e’ accumulated by the network are generally positive, the xf values obtained in that last round are usually higher than the initial ones. Whatever contortions I perform with the standardization, xf = max(xi; xf) and inevitably xf > xi.

And that is not even the hardest case. There are situations where the local errors are negative rather than positive, and after their accumulation I get ∑e < 0 and possibly xf = xi + ∑e < 0 as well. Seriously embarrassing. Can I have a negative supply of money, or negative energy efficiency?

I can make an elegant dodge through the de Moivre – Laplace theorem and assume that, in a large set of experimental values returned by the neural network, the expected value is their arithmetic mean, i.e. xf = [∑(xi + ei)] / n. Elegant, certainly, but is it a valid interpretation of the language the neural network speaks to me? Artificial intelligence is a form of intelligence. It can create meaning, and not merely adopt the meaning I impose on it. Does it speak de Moivre – Laplace? Who knows…

Right, that was philosophy. Time to move on to the experimentation itself. I more or less take up the perceptron described in « Surpopulation sauvage ou compétition aux États-Unis »: a neural layer of input and observation, a combination layer (attribution of weight coefficients, as well as of functions of local adaptation to the observed data), an activation layer (two parallel functions: sigmoid and hyperbolic tangent), and finally a selection layer. In the latter, I introduce two complex, alternative decision mechanisms. Both assume that a collective human intelligence displays two contradictory tendencies. On the one hand, we are collectively able to open up to novelty, hence to loosen the mutual coherence between the variables that represent us. On the other hand, we have a limited tolerance to cognitive dissonance. Beyond that tolerance threshold we perceive the surplus of novelty as bad and we protect ourselves against it. The first selection mechanism takes the lesser of the two errors. The two neurons in the activation layer produce competing activations, and the selection neuron, in this scheme, picks the activation that produces the smaller absolute value of error. Why the absolute value and not the error as such? Well, the activation error can very well be negative. It happens all the time. If I have one negative error and one positive, the lesser of the two will be, arithmetically, the negative error, even if its deviation from the activation value is greater than that of the positive error. I want to minimize the deviation, and I minimize it in the instant. I take the experience which gives me less cognitive dissonance right now.

The second selection mechanism consists in taking the arithmetic mean of the two errors and then dividing it by a coefficient proportional to the ordinal number of the experimentation round. That division is performed only in the strictly spoken experimentation rounds, not in the learning phase on real data. I explain that distinction in a moment. This selection mechanism corresponds to a situation where we, the collective intelligence, are rational in learning from direct experience of empirical reality – so we weigh all of reality uniformly – but as soon as it comes to pure experimentation, we reduce cognitive dissonance over time. We perceive earlier experience as more important than subsequent experience.
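Just to fix ideas, here is my reading of those two selection mechanisms, as a minimal Python sketch; the error values are invented, and this is not the exact code of my perceptron.

```python
def select_least_absolute_error(error_sigmoid, error_tanh):
    """First mechanism: keep the activation error with the smaller absolute value,
    i.e. minimize cognitive dissonance in the instant."""
    return error_sigmoid if abs(error_sigmoid) < abs(error_tanh) else error_tanh

def select_damped_mean_error(error_sigmoid, error_tanh, round_index, experimental_phase):
    """Second mechanism: average the two errors and, in the experimental phase only,
    divide by the ordinal number of the round, so new experience weighs less and less."""
    mean_error = (error_sigmoid + error_tanh) / 2.0
    return mean_error / round_index if experimental_phase else mean_error

print(select_least_absolute_error(-0.30, 0.05))  # 0.05: the smaller deviation wins
print(select_damped_mean_error(-0.30, 0.05, round_index=10, experimental_phase=True))  # -0.0125
```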

The neural network works in two stages. First, it observes the empirical data, i.e. the N = 1228 occurrences of ‘country <> year’ in the starting database. It observes them actively: from empirical observation n = 2 onwards, it adds the error selected in the previous round to the standardized values of the input variables, and it does not touch the outcome variable. After the 1228 learning rounds, the network moves on to 3700 rounds of experimentation. I do not know why, but I like rounding my perceptron’s total workload to about 5000 rounds. In any case, in the 3700 experimentation rounds, the network adds the error from the previous round to the input variables computed in that same previous round.

As regards the work with the input variables, the perceptron accumulates experience in the form of a moving average. In the first experimentation round, the observation neuron in the first layer of the network takes the arithmetic mean of the 1228 values from the learning phase and adds to it the error selected for propagation in the last, 1228th learning round. In the second experimentation round, the perceptron takes the arithmetic mean of the last 1227 learning rounds plus the first experimentation round, and adds to it the error selected in the first experimentation round, and so on. The input layer of the network is thus a bit conservative and perceives the results of new experiments through the expected value, which, in turn, is built on the basis of the past. Sounds familiar, doesn’t it? As regards the outcome variable, on the other hand, the perceptron is even more conservative. It takes the arithmetic mean of the 1228 empirical values as the expected value, and it sticks to it. Once again, I want to simulate a tendency to reduce cognitive dissonance.
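To make the structure of that two-phase routine visible, here is a rough sketch with a single sigmoid neuron, random data and fixed weights. It is only an illustration of the loop, not my actual perceptron: the real thing has the combination, activation and selection layers described above, and its moving average runs over a window of observations, whereas here, for simplicity, I average everything seen so far.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.05, 0.95, size=(1228, 11))   # standardized input variables (invented)
y_expected = 0.5                                # the outcome variable, held at its sample mean

def activate(x, w):
    return 1.0 / (1.0 + np.exp(-np.dot(x, w)))  # a single sigmoid neuron

w = rng.normal(0.0, 0.1, size=11)
carried_error = 0.0
history = []

# Phase 1: 1228 learning rounds on empirical data; the error selected in the previous
# round is added to the input variables, the outcome variable is left untouched
for i in range(X.shape[0]):
    x = X[i] + carried_error
    carried_error = y_expected - activate(x, w)
    history.append(x)

# Phase 2: 3700 experimentation rounds; inputs are the running mean of past experience
# plus the previous error, damped here by the round index (the second selection strategy)
for r in range(1, 3701):
    x = np.mean(history, axis=0) + carried_error
    carried_error = (y_expected - activate(x, w)) / r
    history.append(x)

print(np.round(history[-1], 3))  # standardized state of the inputs in the last experimental round
```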

On the language side, I am testing two ways of listening to what my perceptron tells me. The first consists in taking, classically if I may say so, the standardized values produced by the last, 3700th experimental round, and de-standardizing them by multiplying them by the maximums empirically recorded in the starting database. In the second method, I take the arithmetic mean of the whole distribution of the given variable, i.e. empirical and experimental values taken together. I reason in terms of the de Moivre – Laplace theorem and assume that the meaning of a large set of numbers is its expected value, i.e. the arithmetic mean.
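The two read-outs, sketched on one variable with invented numbers; the empirical maximum here is an assumption, not a figure from my database.

```python
import numpy as np

empirical_max = 10_620.0                     # e.g. the largest E/N recorded in the sample (assumed)
empirical = np.array([0.28, 0.31, 0.26])     # standardized empirical values (invented)
experimental = np.array([0.33, 0.35, 0.41])  # standardized values from the experimental rounds (invented)

# Reading 1: de-standardize the value of the last experimental round
last_round_reading = experimental[-1] * empirical_max

# Reading 2: de-standardize the arithmetic mean of the whole distribution,
# empirical and experimental values pooled together (the de Moivre - Laplace reading)
pooled_mean_reading = np.concatenate([empirical, experimental]).mean() * empirical_max

print(round(last_round_reading, 1), round(pooled_mean_reading, 1))
```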

As for my variables, their general catalogue is given in the table below. After the table, I continue with the description.

Table 1

Variable code | Description
Q/E | GDP per kg of oil equivalent of energy consumed (constant prices, 2011 PPP $) – OUTCOME VARIABLE
CK/PA | Average fixed capital per one resident patent application (millions of 2011 PPP $, constant prices)
A/Q | Aggregate depreciation of fixed assets as % of GDP
PA/N | Resident patent applications per 1 million inhabitants
M/Q | Aggregate supply of money as % of GDP
E/N | Final consumption of energy in kilograms of oil equivalent per capita
RE/E | Renewable energy consumption as % of total energy consumption
U/N | Urban population as % of total population
Q | Gross Domestic Product (millions of 2011 PPP $, constant prices)
Q/N | GDP per capita (2011 PPP $, constant prices)
N | Population
DA/N | Food deficit per capita (kcal per day)

I make my neural network work with these variables under 4 different vital functions, i.e. putting 4 different variables in the category of outcome to optimize: the food deficit per person, urban population as % of total population, the energy efficiency of the economy, and finally fixed assets per one patent application. As for the importance I attach to this last variable, you can consult « My most fundamental piece of theory ». I chose the variables I intuitively consider structural. Intuitively, I said.

At the start, the arithmetic means of my variables – hence their statistically expected values – are the following:

Q/E = $8,72 per kg of oil equivalent;
CK/PA = $3 534,8 per patent application;
A/Q = 14,2% of GDP;
PA/N = 158,9 patent applications per 1 million inhabitants;
M/Q = 74,6% of GDP in money supply;
E/N = 3 007,3 kg of oil equivalent per person per year;
DA/N = 26,4 kcal per person per day;
RE/E = 16,05% of total energy consumption;
U/N = 69,7% of the population;
Q = $1 120 874,23 mln;
Q/N = $22 285,63 per capita;
N = 89 965 651 people.

That is the empirical starting point. It is a relatively opulent society, although with small food-related problems, rather big, moderately greedy for energy, and generally average, as was to be expected. Two variables are exceptions to that tendency: the percentage of urban population and the supply of money. Average urbanization worldwide is currently around 55%, whilst our sample swings towards 70%. The supply of money in the global economy currently stands at almost 125% of GDP, and our sample makes a gentle 74,6%. Now, let’s see what the neural network can learn if its vital function is oriented on a stable food deficit per person per day, i.e. with DA/N as the outcome variable. Tables 2 and 3, below, present the results of learning, whilst Graphs 1 – 4, further down, give an overview of the way the network learns under different conditions.

I start by discussing the basic meta-variable: the local error of the network. Graphs 1 and 2 give an idea of the difference between the two learning strategies under consideration. Learning by the lesser error is paradoxical. During the 1228 empirical rounds, it does lead to a reduction of the error, as any well-behaved perceptron should. Nevertheless, as soon as the network moves on to experimenting with itself, the error grows with each consecutive round. The network swings more and more between alternative states. Interesting: when the network is programmed to choose the lesser error, it generates more and more error. By contrast, learning by decreasing mean error – hence the strategy which reflects a growing tendency to reduce cognitive dissonance – works like a textbook case. The error in the empirical phase is reduced to a very low level, and then, in the phase of pure experimentation, it tends towards zero.

When I move on to the adaptation function, i.e. to the mean Euclidean distance between the variables of the network (Graphs 3 and 4), the difference between the two learning strategies is a bit less pronounced, although visible. In both cases, the internal cohesion of the network changes in two quite distinct phases. As long as the perceptron works with the 1228 empirical observations, its cohesion oscillates very strongly. As soon as it moves to experimenting with itself, the oscillations die down, but in two different ways. The perceptron which chooses the lesser error and learns uniformly over time (Graph 3) settles its internal cohesion at a relatively low level and then increases the amplitude of oscillation again. By contrast, the perceptron which averages its local errors and displays a growing resistance to new information (Graph 4) holds very firmly to the level of cohesion reached towards the end of the learning phase on empirical data.

I see here two different intelligences, which represent two ways of rendering a well-known phenomenon, that of resistance to cognitive dissonance. The perceptron which learns by the lesser error reduces its dissonance on the spot and locally, without doing so in the long run. The one which learns by the mean error and divides it by the ordinal number of the consecutive experimentation round acts differently: it tolerates more error locally but closes itself off progressively over the long run.

To the extent that I consider them representations of a collective intelligence, I see interesting analogies with our social order. The perceptron which learns by the lesser error seems more intelligent than the one which averages the error and stiffens as it learns. It is as if local controversies about climate change were more fertile in learning than a highly codified, rigid system of knowledge.

As regards the results, the two alternative intelligences also behave very differently. In general, the intelligence which chooses the lesser local error but does not care about the passage of time (Table 2) produces higher values than the one which averages the error and develops the feeling of having learnt all there was to learn (Table 3). In fact, the former adds to all the variables of the perceptron, whilst the latter reduces them all.

I want to dwell on the interpretation of these numbers, i.e. on how to understand what the neural network is trying to tell me. The numbers in Table 2 seem to say that if we – civilisation – want to increase our energy efficiency, we need to increase the pace of innovation significantly. I see it above all in the percentage of GDP taken by the depreciation of fixed assets: the variable A/Q. The higher that percentage, the faster the pace of rotation in technologies. To have an average energy efficiency, as a civilisation, at a level barely 50% higher than now, we would need to speed up the rotation of technologies by roughly 25%.

There is a variable collateral to innovation in my database: CK/PA, or the coefficient of fixed assets per one patent application. It is, in a way, the amount of capital an average invention can feed on. In this simulation with the neural network you can see that the differences in the magnitude of CK/PA are so large that they become interesting. The perceptron which learns with growing resistance to new information yields negative values of CK/PA, which seems absurd. Absurd, maybe, but why? This is one of those situations when I ask myself fundamental questions about what collective intelligence is.

Table 2 – Learning by the lesser error, uniform over time

Variable | Values of the 3700th experimental round | Expected mean values
Q/E | $15,80 per kg of oil equivalent | $12,16 per kg of oil equivalent
CK/PA | $78 989,68 per patent application | $42 171,01 per patent application
A/Q | 25% of GDP | 19% of GDP
PA/N | 1 426,24 patent applications per 1 million inhabitants | 770,85 patent applications per 1 million inhabitants
M/Q | 167,49% of GDP in money supply | 120,56% of GDP in money supply
E/N | 7 209,06 kg of oil equivalent per person per year | 5 039,16 kg of oil equivalent per person per year
RE/E | 37,45% of total energy consumption from renewables | 25,34% of total energy consumption from renewables
U/N | 115,88% ( ! ) of the population in cities | 77,21% of the population in cities
Q | $7 368 088,87 mln | $3 855 530,27 mln
Q/N | $63 437,19 per capita | $41 288,52 per capita
N | 553 540 602 people | 295 288 302 people
Outcome variable DA/N | 26,40 kcal per person per day | 26,40 kcal per person per day

Table 3 – Learning by mean error, decreasing over the experimentation rounds

Variable | Values of the 3700th experimental round | Expected mean values
Q/E | $7,41 per kg of oil equivalent | $8,25 per kg of oil equivalent
CK/PA | ($2 228,03) per patent application | ($3 903,81) per patent application
A/Q | 11% of GDP | 14% of GDP
PA/N | 101,89 patent applications per 1 mln inhabitants | 101,78 patent applications per 1 mln inhabitants
M/Q | 71,93% of GDP in money supply | 71,52% of GDP in money supply
E/N | 3 237,24 kg of oil equivalent per person per year | 3 397,75 kg of oil equivalent per person per year
RE/E | 10,21% of total energy consumption from renewables | 12,64% of total energy consumption from renewables
U/N | 65% of the total population in cities | 75,46% of the total population in cities
Q | $730 310,21 mln | $615 711,51 mln
Q/N | $25 095,49 per capita | $24 965,23 per capita
N | 15 716 495 people | 2 784 733,90 people
Outcome variable DA/N | 26,40 kcal per person per day | 26,40 kcal per person per day

Graph 1 (local error of the network)

Graph 2 (local error of the network)

Graph 3 (mean Euclidean distance between variables – learning by the lesser error)

Graph 4 (mean Euclidean distance between variables – learning by decreasing mean error)

I keep delivering good science to you, almost new, just a bit dented in the design process. I remind you that you can download the business plan of the BeFund project (also available in an English version). You can also download my book entitled “Capitalism and Political Power”. I want to use crowdfunding to give myself a financial base for this effort. You can support my research financially, according to your best judgment, through my PayPal account. You can also register as my patron on my Patreon page. If you do, I will be grateful if you point out two important things to me: what kind of reward would you expect in exchange for your patronage, and what stages would you like to see in my work?

[1] Andreoni, V. (2017). Energy Metabolism of 28 World Countries: A Multi-scale Integrated Analysis. Ecological Economics, 142, 56-69

[2] Velasco-Fernández, R., Giampietro, M., & Bukkens, S. G. (2018). Analyzing the energy performance of manufacturing across levels using the end-use matrix. Energy, 161, 559-572

We, the average national economy. Research and case study in finance.

 

My editorial on You Tube

 

I am returning to a long-followed path of research, that on financial solutions for promoting renewable energies, and I am making it into educational content for my course of « Fundamentals of Finance ». I am developing on artificial intelligence as well. I think that artificial intelligence is just made for finance. Financial markets, and the contractual patterns they use, are akin to endocrine systems. They generate signals, more or less complex, and those signals essentially say: ‘you lazy f**ks, you need to move and do something, and what that something is supposed to be you can read from between the lines of those financial instruments in circulation’. Anyway, what I am thinking about is to use artificial intelligence for simulating the social change that a financial scheme, i.e. a set of financial instruments, can possibly induce in the ways we produce and use energy. This update is at the frontier of scientific research, business planning, and education strictly spoken. I know that some students can find it hard to follow, but I just want to show real science at work, 100% pure beef.

 

I took a database which I have already used in my research on so-called energy efficiency, i.e. on the amount of Gross Domestic Product we can derive on the basis of 1 kilogram of oil equivalent. It is a complex indicator of how efficient a given social system is at using energy to make things turn on the economic side. We take the total consumption of energy in a given country, and we convert it into standardized units equivalent to the amount of energy we can have out of one kilogram of natural oil. This standardized consumption of energy becomes the denominator of a coefficient, whose numerator is the Gross Domestic Product. Thus, it goes like “GDP / Energy consumed”. The greater the value of that coefficient, i.e. the more dollars we derive from one unit of energy, the greater the energy efficiency of our economic system.

 

Since 2012, the global economy has been going through an unprecedentedly long period of expansion in real output[1]. Whilst the obvious question is “When will it crash?”, it is interesting to investigate the correlates of this phenomenon in the energy sector. In other terms, are we, as a civilisation, more energy-efficient as we get (temporarily) much more predictable in terms of economic growth? The very roots of this question are to be found in the fundamental mechanics of our civilisation. We, humans, are generally good at transforming energy. There is a body of historical and paleontological evidence that accurate adjustment of energy balance was one of the key factors in the evolutionary success of humans, both at the level of individual organisms and of whole communities (Leonard, Robertson 1997[2]; Robson, Wood 2008[3]; Russon 2010[4]).

When we talk about energy efficiency of the human civilisation, it is useful to investigate the way we consume energy. In this article, the question is being tackled by observing the pace of growth in energy efficiency, defined as GDP per unit of energy use (https://data.worldbank.org/indicator/EG.GDP.PUSE.KO.PP.KD?view=chart ). The amount of value added we can generate out of a given set of production factors, when using one unit of energy, is an interesting metric. It shows energy efficiency as such, and, in the same time, the relative complexity of the technological basket we use. As stressed, for example, by Moreau and Vuille (2018[5]), when studying energy intensity, we need to keep in mind the threefold distinction between: a) direct consumption of energy b) transport c) energy embodied in goods and services.

One of the really deep questions one can ask about the energy intensity of our culture is to what extent it is being shaped by short-term economic fluctuations. Ziaei (2018[6]) proved empirically that observable changes in the energy intensity of the U.S. economy are substantial in response to changes in monetary policy. There is a correlation between the way financial markets work and the consumption of energy. If the relative increase in energy consumption is greater than the pace of economic growth, the GDP created with one unit of energy decreases, and vice versa. There is also a mechanism of reaction of the energy sector to public policies. In other words, some public policies have a significant impact on the energy efficiency of the whole economy. Different sectors of the economy respond with different intensity, as regards their consumption of energy, to public policies and to changes in financial markets. We can assume that a distinct sector of the economy corresponds to a distinct basket of technologies, and a distinct institutional setting.

Faisal et al. (2017[7]) found a long-run correlation between the consumption of energy and real output of the economy, studying the case of Belgium. Moreover, the same authors found significant causality from real output to energy consumption, and that causality seems to be uni-directional, without any significant, reciprocal loop.

Energy efficiency of national economies, as measured with the coefficient of GDP per unit of energy (e.g. per kg of oil equivalent), should take into account that any given market is a mix of goods – products and services – which generate aggregate output. Any combination “GDP <> energy use” is a combination of product markets, as well as technologies (Heun et al. 2018[8]).

There is quite a fruitful path of research, which assumes that aggregate use of energy in an economy can be approached in a biological way, as a metabolic process. The MuSIASEM methodological framework seems to be promising in this respect (e.g. Andreoni 2017[9]). This leads to a further question: can changes in the aggregate use of energy be considered as adaptive changes in an organism, or in generations of organisms? In another development regarding the MuSIASEM framework, Velasco-Fernández et al (2018[10]) remind that real output per unit of energy consumption can increase, on a given basis of energy supply, through factors other than technological change towards greater efficiency in energy use. This leads to investigating the very nature of technological change at the aggregate level. Is aggregate technological change made only of engineering improvements at the microeconomic level, or maybe the financial reshuffling of the economic system counts, too, as adaptive technological change?

The MuSIASEM methodology stresses the fact that international trade, and its accompanying financial institutions, allow some countries to externalise industrial production, thus, apparently, to decarbonise their economies. Still, the industrial output they need takes place, just somewhere else.

From the methodological point of view, the MuSIASEM approach explores the compound nature of energy efficiency measured as GDP per unit of energy consumption. Energy intensity can be understood at least at two distinct levels: aggregate and sectoral. At the aggregate level, all the methodological caveats make the « GDP per kg of oil equivalent » just a comparative metric, devoid of much technological meaning. At the sectoral level, we get closer to technology strictly spoken.

There is empirical evidence that, at the sectoral level, the consumption of energy per unit of aggregate output tends to: a) converge across different entities (regions, entrepreneurs etc.) and b) decrease (see for example: Yu et al. 2012[11]).

There is also empirical evidence that general aging of the population is associated with lower energy intensity, whilst urbanization has the opposite effect, i.e. it is positively correlated with energy intensity (Liu et al. 2017[12]).

It is important to understand how and to what extent public policies can influence energy efficiency at the macroeconomic scale. These policies can either address directly the issue of thermodynamic efficiency of the economy, or just aim at offshoring the most energy-intensive activities. Hardt et al. (2018[13]) study, in this respect, the case of the United Kingdom, where each percentage point of growth in real output has been accompanied, in recent years, by a 0,57% reduction in energy consumption per capita.

There are grounds for claiming that increasing the energy efficiency of national economies matters more for combatting climate change than the strictly spoken transition towards renewable energies (Weng, Zhang 2017[14]). Still, other research suggests that the transition towards renewable energies has an indirectly positive impact upon overall energy efficiency: economies that make a relatively quick transition towards renewables seem to associate that shift with better efficiency in using energy for creating real output (Akalpler, Shingil 2017[15]).

It is worth keeping in mind that the energy efficiency of national economies has two layers, namely the efficiency of producing energy in itself, as distinct from the use we make of the so-obtained net energy. This is the concept of Energy Return on Energy Invested (EROI) (see: Odum 1971[16]; Hall 1972[17]). Changes in energy efficiency can occur on both levels, and in this respect the transition towards renewable sources of energy seems to bring more energy efficiency in that first layer, i.e. in the extraction of energy strictly spoken, as compared with fossil fuels. The problematically slow growth in energy efficiency could be coming precisely from the de facto decreasing efficiency of transformation in fossil fuels (Sole et al. 2018[18]).

 

Technology and social structures are mutually entangled (Mumford 1964[19], McKenzie 1984[20], Kline and Pinch 1996[21]; David 1990[22], Vincenti 1994[23]; Mahoney 1988[24]; Ceruzzi 2005[25]). An excellent, recent piece of research by Taalbi (2017[26]) attempts a systematic, quantitative investigation of that entanglement.

The data published by the World Bank regarding energy use per capita in kg of oil equivalent (OEPC) (https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE ) allows an interesting insight, when combined with structural information provided by the International Energy Agency (https://www.iea.org). As one ranks countries regarding their energy use per capita, the resulting hierarchy is, in the same time, a hierarchy in the broadly spoken socio-economic development. Countries displaying less than 200 kg of oil equivalent per capita are, in the same time, barely structured as economies, with little or no industry and transport infrastructure, with quasi-inexistent institutional orders, and with very limited access to electricity at the level of households and small businesses.  In the class comprised between 200 kg OEPC and approximately 600 ÷ 650 kg OEPC, one can observe countries displaying progressively more and more development in their markets and infrastructures, whilst remaining quite imbalanced in their institutional sphere. Past the mark of 650 OEPC, stable institutions are observable. Interestingly, the officially recognised threshold of « middle income », as macroeconomic attribute of whole nations, seems corresponding to a threshold in energy use around 1 500 kg OEPC. The neighbourhood of those 1 500 kg OEPC looks like the transition zone between developing economies, and the emerging ones. This is the transition towards really stable markets, accompanied by well-structured industrial networks, as well as truly stable public sectors. Finally, as income per capita starts qualifying a country into the class of « developed economies », that country is most likely to pass another mark of energy consumption, that of 3000 kg OEPC. This stylized observation of how energy consumption is linked to social structures is partly corroborated by other research, e.g. that regarding social equality in the access to energy (see for example: Luan, Chen 2018[27])

The nexus of energy use per capita, on the one hand, and institutions, on the other hand, has even found a general designation in recent literature: “energy justice”. A cursory review of that literature demonstrates the depth of emotional entanglement between energy and social structures: it seems to be more about the connection between energy and the self-awareness of societies than about anything else (see for example: Fuller, McCauley 2016[28]; Broto et al. 2018[29]). The difficulty in getting rid of emotionally grounded stereotypes in this path of research might have its roots in the fact that we can hardly understand what energy really is, and attempts at this understanding send us to the very foundations of our understanding of what reality is (Coelho 2009[30]; McKagan et al. 2012[31]; Frontali 2014[32]). Recent research, conducted from the point of view of management science, reveals a just as recent emergence of new, virtually unprecedented institutional patterns in the sourcing and use of energy. A good example of that institutional change is to be found in the new role of cities as active players in the design and implementation of technologies and infrastructures critical for energy efficiency (see for example: Geels et al. 2016[33]; Heiskanen et al. 2018[34]; Matschoss, Heiskanen 2018[35]).

 

Changes observable in the global economy, with respect to energy efficiency measured as GDP per unit of energy consumed, are interestingly accompanied by changes in the supply of money, urbanization, as well as the shift towards renewable energies. The years 2008 – 2010, which marked, with a deep global recession, the passage towards the currently experienced, record-long and record-calm period of economic growth, displayed a few other interesting transitions. In 2008, the supply of broad money in the global economy exceeded, for the first documented time, 100% of the global GDP, and that coefficient of monetization (i.e. the inverse of the velocity of money) has been growing ever since (World Bank 2018[36]). Similarly, the coefficient of urbanization, i.e. the share of urban population in the global total, exceeded 50% in 2008, and has kept growing since (World Bank 2018[37]). Even more intriguingly, the global financial crisis of 2007 – 2009 took place exactly when the global share of renewable energies in the total consumption of energy was hitting a trough, below 17%, and as the global recovery started in 2010, that coefficient started swelling as well, and has been displaying good growth since then[38]. Besides, empirical data indicates that since 2008, the share of aggregate amortization (of fixed assets) in the global GDP has been consistently growing, after having passed the cap of 15% (Feenstra et al. 2015[39]). Some sort of para-organic pattern emerges out of those observations, where the energy efficiency of the global economy is being achieved through a more intense pace of technological change, in the presence of money acting as a hormone, catabolizing real output and fixed assets, whilst anabolizing new generations of technologies.

 

Thus, I have that database, which you can download precisely by clicking this link. One remark: this is an Excel file, and when you click on the link, it downloads without further notice. There is no opening on the screen. In this set, we have 12 variables: i) GDP per unit of energy use (constant 2011 PPP $ per kg of oil equivalent), ii) fixed assets per 1 resident patent application, iii) share of aggregate depreciation in the GDP – the speed of technological obsolescence, iv) resident patent applications per 1 mln people, v) supply of broad money as % of GDP, vi) energy use per capita (kg of oil equivalent), vii) depth of the food deficit (kilocalories per person per day), viii) renewable energy consumption (% of total final energy consumption), ix) urban population as % of total population, x) GDP (demand side), xi) GDP per capita, and finally xii) population. My general, intuitive idea is to place energy efficiency in a broad socio-economic context, and to see what role in that context is being played by financial liquidity. In simpler words, I want to discover how the energy efficiency of our civilization can be modified by a possible change in financial liquidity.

 

My database is a mix of 59 countries and years of observation ranging from 1960 to 2014, 1228 records in total. Each record is the state of things, regarding the above-named variables, in a given country and year. In quantitative research we call it a data panel. You have bits of information inside and you try to make sense out of them. I like pictures. Thus, I made some. These are the two graphs below. One of them shows the energy efficiency of national economies, the other one focuses on the consumption of energy per capita, and both variables are shown as a function of the supply of broad money as % of GDP. I consider the latter to be a crude measure of financial liquidity in the given place and time. The more money is being supplied per unit of Gross Domestic Product, the more financial liquidity people have for doing something with those units of GDP. As you can see, the thing goes really all over the place. You can really say: ‘that is a cloud of points’. As is usually the case with clouds, you can see any pattern in it, except anything mathematically regular. I can see a dung beetle in the first one, and a goose flapping its wings in the second. Many possible connections exist between the basic financial liquidity of the economic system, on the one hand, and the way we use energy, on the other hand.
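For the record, the kind of code that produces those two clouds of points; the file name and the column names below are assumptions of mine, not the actual headers of the Excel file.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("energy_panel.xlsx")  # hypothetical file name

fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# energy efficiency vs financial liquidity
axes[0].scatter(df["money_supply_pct_gdp"], df["gdp_per_kg_oe"], s=8)
axes[0].set_xlabel("Supply of broad money, % of GDP")
axes[0].set_ylabel("GDP per kg of oil equivalent")

# energy use per capita vs financial liquidity
axes[1].scatter(df["money_supply_pct_gdp"], df["energy_use_per_capita_kgoe"], s=8)
axes[1].set_xlabel("Supply of broad money, % of GDP")
axes[1].set_ylabel("Energy use per capita (kg oe)")

plt.tight_layout()
plt.show()
```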

 

I am testing my database for general coherence. In the table below, I am showing the arithmetical average of each variable. As you hopefully know, since Abraham de Moivre we tend to assume that the arithmetical average of a large sample of something is the expected value of that something. Thus, the table below shows what we can reasonably expect from the database. We can see a bit of incoherence. Mean energy efficiency is $8,72 per kg of oil equivalent in energy. Good. Now, I check. I take the energy consumption per capita and I multiply it by the number of capita, thus I go 3 007,28 * 89 965 651 = 270 551 748,43 tons of oil equivalent. This is the amount of energy consumed in one year by the average expected national society of homo sapiens in my database. Now, I divide the average expected GDP in the sample, i.e. $1 120 874,23 mln, by that expected total consumption of energy, and I hit just $1 120 874,23 mln / 270 551 748,43 tons = $4,14 per kilogram.

 

It is a bit low, given that a few sentences ago the same variable was supposed to be $8,72 per kg. And this is just a minor discrepancy compared to the GDP per capita, which is the central measure of wealth in a population. The average calculated straight from the database is $22 285,63. Cool. This is quite a lot, you know. Now, I check. I take the aggregate average GDP per country, i.e. $1 120 874,23 mln, and I divide it by the average headcount of population, i.e. I go $1 120 874 230 000 / 89 965 651 = $12 458,91. What? $12 458,91? But it was supposed to be $22 285,63! Who took those 10 thousand dollars away from me? I mean, $12 458,91 is quite respectable, it is just a bit below my home country, Poland, presently, but still… Ten thousand dollars of difference? How is it possible?
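The cross-check itself is a one-minute job; here is a small sketch using just the averages quoted above, with rounding noise to be expected. Part of the gap simply reflects that the average of per-country ratios is not the same thing as the ratio of the sample averages.

```python
# averages copied from the text (pooled sample, N = 1228 country-year records)
gdp_mln = 1_120_874.23           # average GDP, millions of 2011 PPP $
population = 89_965_651          # average population
energy_per_capita_kg = 3_007.28  # kg of oil equivalent per person per year
energy_eff_avg = 8.72            # average of the GDP-per-kg-oe column
gdp_per_capita_avg = 22_285.63   # average of the GDP-per-capita column

total_energy_kg = energy_per_capita_kg * population
implied_efficiency = (gdp_mln * 1_000_000) / total_energy_kg   # $ per kg of oil equivalent
implied_gdp_per_capita = (gdp_mln * 1_000_000) / population

print(round(implied_efficiency, 2))      # about 4.14, versus 8.72 averaged column-wise
print(round(implied_gdp_per_capita, 2))  # about 12 458.9, versus 22 285.63 averaged column-wise
```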

 

It is so embarrassing when numbers are not what we expect them to be. As a matter of fact, they usually aren’t. It is just our good will that makes them look so well fitting to each other. Still, this is what numbers do, when they are well accounted for: they embarrass. As they do so, they force us to think, and to dig meaning out from underneath the numbers. This is what quantitative analysis in social sciences is supposed to do: give us the meaning that we expect when we measure things about our own civilisation.

 

Table 1 – Average values from the pooled database of N = 1228 country-year observations

Variable | Average expected value from empirical data (N = 1228 records)
GDP per unit of energy use (constant 2011 PPP $ per kg of oil equivalent) | 8,72
Fixed assets per 1 resident patent application (mln of constant 2011 PPP $) | 3 534,80
Share of aggregate depreciation in the GDP – speed of technological obsolescence | 14%
Resident patent applications per 1 mln people – speed of invention | 158,90
Supply of broad money as % of GDP – observed financial liquidity | 74,60%
Energy use (kg of oil equivalent per capita) | 3 007,28 kg
Depth of the food deficit (kilocalories per person per day) | 26,40
Renewable energy consumption (% of total final energy consumption) | 16,05%
Urban population as % of total population | 69,70%
GDP (demand side; millions of constant 2011 PPP $) | 1 120 874,23
GDP per capita (constant 2011 PPP $) | $22 285,63
Population | 89 965 651

 

Let’s get back to the point, i.e. to finance. As I explain over and over again to my students, when we say ‘finance’, we almost immediately need to say: ‘balance sheet’. We need to think in terms of a capital account. Those expected average values from the table can help us to reconstruct at least the active side of that representative, expected, average economy in my database. There are three variables which sort of overlap: a) fixed assets per 1 resident patent application b) resident patent applications per 1 mln people and c) population. I divide the nominal headcount of population by 1 000 000, and thus I get population denominated in millions. I multiply the so-denominated population by the coefficient of resident patent applications per 1 mln people, which gives me, for each country and each year of observation, the absolute number of patent applications in the set. In my next step, I take the coefficient of fixed assets per 1 patent application, and I multiply it by the freshly-calculated-still-warm absolute number of patent applications.

 

Now, just to make it arithmetically transparent, when I do (« Fixed assets » / « Patent applications ») * « Patent applications », I take a fraction and I multiply it by its own denominator: it is de-factorisation. I am left with just the numerator of that initial fraction, thus with the absolute amount of fixed assets. For my representative, average, expected country in the database, I get Fixed Assets = $50 532 175,96 mln.
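
Written as a few lines of code, with the averages copied from the table above, the de-factorisation looks like this. The variable names are mine, and I read the fixed-assets coefficient as millions of dollars per patent application, consistently with the variable list I use later in this research:

```python
# De-factorisation of the fixed-assets figure from the averages in Table 1.
fixed_assets_per_patent_mln = 3_534.80    # $ mln per resident patent application
patents_per_mln_people      = 158.90      # resident patent applications per 1 mln people
population                  = 89_965_651  # headcount

# (assets / patent) * (patents / 1 mln people) * (population / 1e6) = absolute assets
patent_applications = patents_per_mln_people * (population / 1_000_000)
fixed_assets_mln = fixed_assets_per_patent_mln * patent_applications

print(f"Patent applications: {patent_applications:,.0f}")
print(f"Fixed assets:        ${fixed_assets_mln:,.2f} mln")
# Close to the $50 532 175,96 mln quoted above; small differences come from
# rounding the averages to two decimals.
```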

 

I do much the same with money. I take “Supply of broad money as % of GDP”, and I multiply it by the GDP in question, which makes Money Supplied = 74,60% * $1 120 874,23 mln = $836 213,98 mln. We have a fragment of the broader balance sheet of our average expected economy: Fixed Assets $50 532 175,96 mln and Monetary Balances $836 213,98 mln. Interesting. How does it unfold over time? Let’s zeee… A bit of rummaging, and I get the contents of Table 2, below. There are two interesting things about that table.

 

 

Table 2 – Changes over time in the capital account of the average national economy

Year | Average fixed assets per national economy ($ mln, constant 2011 PPP) | GDP per unit of energy use (constant 2011 PPP $ per kg of oil equivalent) in the average national economy | Supply of broad money in the average national economy ($ mln, constant 2011 PPP) | Money to fixed assets
1990 | 2 036 831,928 | 8,08 | 61,526 | 0,0030%
1991 | 1 955 283,198 | 8,198 | 58,654 | 0,0030%
1992 | 2 338 609,511 | 8,001 | 61,407 | 0,0026%
1993 | 2 267 728,024 | 7,857 | 60,162 | 0,0027%
1994 | 2 399 075,082 | 7,992 | 60,945 | 0,0025%
1995 | 2 277 869,991 | 7,556 | 60,079 | 0,0026%
1996 | 2 409 816,67 | 7,784 | 64,268 | 0,0027%
1997 | 2 466 046,108 | 7,707 | 71,853 | 0,0029%
1998 | 2 539 482,259 | 7,76 | 77,44 | 0,0030%
1999 | 2 634 454,042 | 8,085 | 82,987 | 0,0032%
2000 | 2 623 451,217 | 8,422 | 84,558 | 0,0032%
2001 | 2 658 255,842 | 8,266 | 88,335 | 0,0033%
2002 | 2 734 170,979 | 8,416 | 92,739 | 0,0034%
2003 | 2 885 480,779 | 8,473 | 97,477 | 0,0034%
2004 | 3 088 417,325 | 8,638 | 100,914 | 0,0033%
2005 | 3 346 005,071 | 8,877 | 106,836 | 0,0032%
2006 | 3 781 802,623 | 9,106 | 119,617 | 0,0032%
2007 | 4 144 895,314 | 9,506 | 130,494 | 0,0031%
2008 | 4 372 927,883 | 9,57 | 140,04 | 0,0032%
2009 | 5 166 422,174 | 9,656 | 171,191 | 0,0033%
2010 | 5 073 697,622 | 9,62 | 164,804 | 0,0032%
2011 | 5 702 948,813 | 9,983 | 178,381 | 0,0031%
2012 | 6 039 017,049 | 10,112 | 195,487 | 0,0032%
2013 | 6 568 280,779 | 10,368 | 205,159 | 0,0031%
2014 | 5 559 781,782 | 10,755 | 161,435 | 0,0029%

 

This is becoming really interesting. Both components in the capital account of the representative, averaged economy had been growing until 2013, and then they fell in 2014. Energy efficiency has been growing quite consistently as well. The ratio of money to fixed assets, thus a crude measure of financial liquidity in this capital account, remains sort of steady, with a slight oscillation. You can see it in the graph below. I represented all the variables as fixed-base indexes: the value recorded for the year 2000 is 1,00, and any other value is indexed over that one. We do that all the time, in social sciences, when we want to compare apparently incompatible magnitudes. A little test of Pearson correlation, and… Yesss! Energy efficiency is Pearson-correlated with the amount of fixed assets at r = 0,953096394, and with the amount of money supplied at r = 0,947606073. All that in the presence of a more or less steady liquidity.
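
For the record, here is how the fixed-base indexing and the Pearson correlation can be computed on a panel like the one in Table 2. The sketch below is not my original spreadsheet; it assumes the columns sit in plain Python lists ordered by year, and it uses only the first five rows of Table 2 to keep the example short.

```python
import math

def fixed_base_index(series, base_position):
    """Express each value as a multiple of the value at the base position
    (in the text above, the base year is 2000)."""
    base = series[base_position]
    return [x / base for x in series]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equally long series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# First five rows of Table 2 (1990-1994), just to show the mechanics
fixed_assets = [2_036_831.928, 1_955_283.198, 2_338_609.511, 2_267_728.024, 2_399_075.082]
efficiency   = [8.08, 8.198, 8.001, 7.857, 7.992]

print(pearson_r(fixed_assets, efficiency))
print(fixed_base_index(efficiency, base_position=0))  # 1990 as the base in this short example
```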

 

Provisional conclusion: the more capital we, the average national economy, accumulate, the more energy-efficient we are, and we sort of dynamically adjust so as to keep the liquidity of that capital, at least the strictly monetary liquidity, at a constant level.

 

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you do, I will be grateful if you tell me the two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

 


Locally smart. Case study in finance.

 

My editorial on You Tube

 

Here I go, at the frontier between research and education. This is how I earn my living, basically: combining research and education. I am presenting an idea I am currently working on, in a team, regarding a financial scheme for local governments. I am going to develop it here as a piece of educational material for my course « Fundamentals of Finance ». I am combining educational explanation with specific techniques of scientific research.

 

Here is the deal: creating a financial scheme, combining pooled funds, crowdfunding, securities, and cryptocurrencies, for facilitating smart urban development through the creation of local start-up businesses. A lot of ideas in one concept, but this is science, for one, and thus anything is possible, and this is education, for two, hence we need to go through as many basic concepts as possible. It goes more or less as follows: a local government creates two financial instruments, a local investment fund and a local crowdfunding platform. Both serve to facilitate the creation and growth of local start-ups, which, in turn, facilitate smart urban development.

 

We need a universe in order to do anything sensible. Good. Let’s make a universe out of local governments, local start-up businesses, and local projects in smart urban development. Projects are groups of people with a purpose and a commitment to achieve it together. Yes, wars are projects, just as musical concerts and public fundraising campaigns for saving the grey wolf. Projects in smart urban development are groups of people with a purpose and a commitment to do something interesting about implementing new technologies into urban infrastructures, thus improving the quality and the sustainability of urban life.

 

A project is like a demon. It needs a physical body, a vessel to carry out the mission at hand. Projects need a physical doorstep to put a clear sign over it. It is called ‘headquarters’, it has an official address, and we usually need it if we want to do something collective and social. This is where letters from the bank should be addressed to. I have the idea to embody local projects of smart urban development in physical bodies of local start-up businesses. This, in turn, implies turning those projects into profitable ventures. What is the point? A business has assets and it has equity. Assets can back equity, and liabilities. Both equity and liabilities can be represented with financial instruments, namely tradable securities. With that, we can do finance.

 

Why securities? The capital I need, and which I don’t have, is the capital somebody is supposed to entrust with me. Thus, by acquiring capital to finance my project, I give other people claims on the assets I am operating with. Those people will be much more willing to entrust me with their capital if those claims are tradable, i.e. when they can back off out of the business really quickly. That’s the idea of financial instruments: making those claims flow and float around, a bit like water.

 

Question: couldn’t we just make securities for projects, without embodying them in businesses? Problematic. Any financial instrument needs some assets to back it up, on the active side of the balance sheet. Projects, as long as they have no such back up in assets, are not really in a position to issue any securities. Another question: can we embody those projects in institutional forms other than businesses, e.g. foundations, trusts, cooperatives, associations? Yes, we can. Each institutional form has its pluses and its minuses. Business structures have one peculiar trait, however: they have at their disposal probably the broadest range of clearly defined financial instruments, as compared to other institutional forms.

 

Still, we can think out of the box. We can take some financial instruments peculiar to business, and try to transplant them onto another institutional body, like that of an association. Let’s just try and see what happens. I am a project in smart urban development. I go to a notary, and I write the following note: “Whoever hands this note on December 31st of any calendar year from now until 2030, will be entitled to receive 20% of net profits after tax from the business identified as LHKLSHFKSDHF”. Signature, date of signature, stamp by the notary. Looks like a security? Mmmwweeelll, maybe. Let’s try and put it in circulation. Who wants my note? What? What do I want in exchange? Let’s zeeee… The modest sum of $2 000 000? You good with that offer?

 

Some of you will say: you, project, you stop right there and you explain a few things. First of all, what if you really have those profits, and 20% of them really makes it worth handing you $2 000 000 now? How exactly can anyone claim those 20%? How will they know the exact sum they are entitled to? Right, say I (the project), we need to write some kind of contract with those rules inside. It can be called a corporate bylaw, and we need to write it all down. For example, what if somebody has this note on December 31st, 2025, and then they sell it to someone else on January 2nd, 2026, while the profits for 2025 will really be accounted for only around February 2026 at best? Then who is entitled to those 20% of profits: the person who had the note on December 31st, 2025, or the one presenting it in 2026, when all is said and done about profits? Sort of tricky, isn’t it? The note says: ‘Whoever hands this note on December 31st… etc.’, only the act of handing is now separated from the actual disclosure of profits. We keep that in mind: the whole point of making a claim into a security is to make it apt for circulation. If the circulation in itself becomes too troublesome, the security loses a lot of its appeal.

 

See? This note contains a conditional claim. Someone needs to hand the note at the right moment and in the right place, there needs to be some profit to share, etc. That’s the thing about conditional claims: you need to know exactly how to apprehend those conditions, upon which the claim is enforceable.

 

As I think about the exact contents of that contract, it looks like me and anyone who holds that note are partners in business. We are supposed to share profits. Profits come from the exploitation of some assets, and they become real only after all the current liabilities have been paid. Hence, we actually share equity in those assets. The note is an equity-based security, a bit primitive, yes, certainly, but still an equity-based security.

 

Another question from the audience: “Project, with all due respect, I don’t really want to be partners in business with you. Do you have an alternative solution to propose?”. Maybe I have… What do you say about a slightly different note, like “Whoever hands this note on December 31st of any calendar year from now until 2030, will be entitled to receive $500 000 from the bank POIUYTR not later than January 15th of the next calendar year”. Looks good? Do you remember what type of note that is? This is a draft, or routed note, a debt-based security. It embodies an unconditional claim, routed on that bank with an interesting name, a bit hard to spell aloud. No conditions attached, thus less paperwork with a contract. Worth how much? Maybe $2 000 000, again?

 

No conditions, yet a suggestion. If, on the one hand, I grant you a claim on 20% of my net profit after tax, and, on the other hand, I am ready to give an unconditional claim on $500 000, you could look for some mathematical connection between the 20% and the $500 000. Oh, yes, and there are those $2 000 000. You are connecting the dots. Same window in time, i.e. from 2019 through 2030, which makes 11 occasions to hand the note and claim the money. I multiply occasions by unconditional claims, and I go 11*$500 000 = $5 500 000. An unconditional claim on $5 500 000 spread over 11 annual periods is being sold for $2 000 000. Looks like a ton of good business to do, still let’s do the maths properly. You could invest your $2 000 000 in some comfy sovereign bonds, for example the federal German ones. Rock solid, those ones, and they can yield like 2% a year. I simulate: $2 000 000*(1+0,02)^11 = $2 486 748,62. You pay me $2 000 000, you forego the opportunity to earn $486 748,62, and, in exchange, you receive an unconditional claim on $5 500 000. Looks good, at least at first sight. It gives you a positive discount rate of ($5 500 000 – $2 486 748,62) / $2 486 748,62 = 121,2% over the whole 11 years of the deal, thus 121,2%/11 = 11% a year. Not bad.
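
The same comparison, written out as a few lines of Python, so you can play with the assumptions (the price paid, the annual claim, the number of occasions, and the 2% yield I assume on those German bonds):

```python
price_paid   = 2_000_000  # what you pay me now for the routed note
annual_claim = 500_000    # unconditional claim per occasion
occasions    = 11         # annual occasions to hand the note
bond_yield   = 0.02       # assumed yield on the benchmark sovereign bonds

total_claims = occasions * annual_claim                    # $5 500 000
opportunity  = price_paid * (1 + bond_yield) ** occasions  # ≈ $2 486 748.62 from the bonds

discount_total  = (total_claims - opportunity) / opportunity  # ≈ 121.2% over the whole deal
discount_annual = discount_total / occasions                  # ≈ 11% a year

print(f"Total claims:                  ${total_claims:,.2f}")
print(f"Bond alternative:              ${opportunity:,.2f}")
print(f"Discount over the whole deal:  {discount_total:.1%}")
print(f"Discount per year:             {discount_annual:.1%}")
```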

 

When you have done the maths from the preceding paragraph, you can assume that I expect, in that project of smart urban development, a future stream of net profit after tax, over the 11 fiscal periods to come, somewhere around those $5 500 000. Somewhere around could be somewhere above or somewhere below. Now, we enter the world of behavioural finance. I have laid my cards on the table, with those two notes. Now, you try to figure out my future behaviour, as well as the behaviour to expect from third parties. When you hold a claim, on whatever and whomever you want, this claim has two financial characteristics: enforceability and risk on the one hand, and liquidity on the other hand. You ask yourself what exactly the current holder of the note can enforce in terms of payback on my part, and what kind of business you can do by selling those notes to someone else.

 

In a sense, we are playing a game. You face a choice between different moves. Move #1: buy the equity-based paper and hold. Move #2: buy the equity-based one and sell it to third parties. Move #3: buy the debt-based routed note and hold. Move #4: buy the routed note and sell it shortly after. You can go just for one of those moves, or make a basket thereof, if you have enough money to invest more than one lump injection of $2 000 000 into my project of smart urban development.

 

You make your move, and you might wonder what kind of move I will make, and what other people will do. Down that avenue of thinking, madness lies. Finance means, very largely, domesticated madness, and thus, when you are a financial player, instead of wondering what other people will do, you look for reliable benchmarks in the existing markets. This is an important principle of finance: quantities and prices are informative about the human behaviour to expect. When you face the choice between moves #1 ÷ #4, you will look, in the first place, at the existing markets. I grant you 20% of my profits in exchange for $2 000 000, and that 20% seems to correspond to at least $500 000 of future annual cash flow. If 20% of something is $500 000, the whole something makes $500 000 / 20% = $2 500 000. How much equity does it correspond to? Here it comes to benchmarking. Aswath Damodaran, from NYU Stern Undergraduate College, publishes average ROE (return on equity) in different industries. Let’s suppose that my project of smart urban development is focused on Environmental & Waste Services. It is urban, it claims to be smart, hence it could be about waste management. That makes 17,95% of average ROE, i.e. net profit/equity = 17,95%. Logically, equity = net profit/17,95%, thus I go $2 500 000/17,95% = $13 927 576,60, and this is the equity you can reasonably expect I expect to accumulate in that project of smart urban development.

 

Among the numerous datasets published by Aswath Damodaran, there is one containing the so-called ROIC, or return on invested capital, thus on the total equity and debt invested in the business. In the same industry, i.e. Environmental & Waste Services, it is 13,58%. It is calculated analogously to ROE, thus it is net profit divided by the total capital invested, and, logically, total capital invested = net profit / ROIC = $2 500 000 / 13,58% = $18 409 425,63. Equity alone makes $13 927 576,60, equity plus debt makes $18 409 425,63, therefore debt = $18 409 425,63 – $13 927 576,60 = $4 481 849,02.
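
Numerically, the whole benchmarking boils down to two divisions and one subtraction. A quick sketch, using the two industry averages quoted above:

```python
expected_net_profit = 2_500_000  # implied by $500 000 being 20% of profits
roe  = 0.1795                    # average return on equity in the industry (Damodaran)
roic = 0.1358                    # average return on invested capital in the industry (Damodaran)

equity_expected  = expected_net_profit / roe           # ≈ $13 927 576.60
capital_invested = expected_net_profit / roic          # ≈ $18 409 425.63
debt_expected    = capital_invested - equity_expected  # ≈ $4 481 849

print(f"Expected equity:            ${equity_expected:,.2f}")
print(f"Expected capital invested:  ${capital_invested:,.2f}")
print(f"Expected debt:              ${debt_expected:,.2f}")
```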

 

With those rates of return on, respectively, equity and capital invested, those 11% of annual discount, benchmarked against German sovereign bonds, look acceptable. If I take a look at the financial instruments listed in the AIM market of the London Stock Exchange, and I dig a bit, I can find corporate bonds, i.e. debt-based securities issued by incorporated business structures. Here come, for example, the bonds issued by 3i Group, an investment fund. They are identified with ISIN (International Securities Identification Number) XS0104440986, they were issued in 1999, and their maturity date is December 3rd, 2032. They are endowed with an interest rate of 5,75% a year, payable in two semi-annual instalments every year. Once again, the 11% discount offered on those imaginary routed notes of my project looks interesting in comparison.

 

Before I go further, I am once again going to play at anticipating your questions. What is the connection between the interest rate and the discount rate, in this case? I am explaining numerically. Imagine you buy corporate bonds, like those 3i Group bonds, with an interest rate of 5,75% a year. You spend $2 000 000 on them. You hold them for 5 years, and then you sell them to third parties. Just for the sake of simplifying, I suppose you sell them for the same face value you bought them at, i.e. for $2 000 000. What happened arithmetically, from your point of view, can be represented as follows: – $2 000 000 + 5*5,75%*$2 000 000 + $2 000 000 = $575 000. Now, imagine that instead of those bonds, you bought, for an amount of $2 000 000, debt-based routed notes of my project, phrased as follows: “Whoever hands this note on December 31st of any calendar year from now until Year +5, will be entitled to receive $515 000 from the bank POIUYTR not later than January 15th of the next calendar year”. With such a draft (remember: another name for a routed note), you will total – $2 000 000 + 5*$515 000 = $575 000.
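
If you want to see those two cash flows side by side, here is a sketch of the arithmetic. It keeps my simplifying assumption that the bonds are resold at face value after 5 years:

```python
investment = 2_000_000

# Alternative A: 3i-style corporate bonds, 5.75% a year, held 5 years, resold at face value
bond_rate = 0.0575
years = 5
bond_cash_flow = -investment + years * bond_rate * investment + investment  # = $575 000

# Alternative B: routed notes paying an unconditional $515 000 once a year for 5 years
note_claim = 515_000
note_cash_flow = -investment + years * note_claim                           # = $575 000

print(bond_cash_flow, note_cash_flow)  # the two instruments sum up to the same net cash flow
```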

 

Same result at the end of the day, just phrased differently. With those routed notes of mine, you earn a discount of $575 000, and with the 3i bonds, you earn interest of $575 000. You understand? Whatever you do with financial instruments, it sums up to a cash flow. You spend your capital on buying those instruments in the first place, and you write that initial expenditure with a ‘-’ sign in your cash flow. Then you receive some ‘+’ cash flows, under various forms, and variously described. At the end of the day, you sum up the initial outflow (minus) of cash with the subsequent inflows (pluses).

 

Now, I look back, I mean back to the beginning of this update on my blog, and I realize how far I have ventured from the initial strand of ideas. I was about to discuss a financial scheme, combining pooled funds, crowdfunding, securities, and cryptocurrencies, for facilitating smart urban development through the creation of local start-up businesses. Good. I go back to it. My fundamental concept is that of public-private partnership, just peppered with a bit of finance. Local governments provide services connected to waste and environmental care. The basic way they finance it is through budgetary spending, and sometimes they create or take interest in local companies specialized in doing it. My idea is to go one step further, and make local governments create and run investment funds specialized in taking interest in such businesses.

 

One of the basic ideas when running an investment fund is to make a portfolio of participations in various businesses, with various temporal horizons attached. We combine the long term with the short one. In some companies we invest for like 10 years, and in some others just for 2 years, and then we sell those shares, bonds, or whatever. When I was working on the business plan for the BeFund project, I had a look at the shape those investment portfolios take. You can sort of follow back that research of mine in « Sort of a classical move » from March 15th, 2018. I had quite a bit of an exploration into the concept of smart cities. See « My individual square of land, 9 meters on 9 », from January 11, 2018, or « Smart cities, or rummaging in the waste heap of culture » from January 31, 2018, as for this topic. What comes out of my research is that the combination of digital technologies with the objectively growing importance of urban structures in our civilisation brings new investment opportunities. Thus, I have this idea of local governments, like city councils, becoming active investors in local businesses, and that local investment would combine the big, steady ventures – like local waste management companies – with a lot of small startup companies.

 

This basic structure in the portfolio of a local investment fund reflects my intuitive take on the way a city works. There is the fundamental, big, heavy stuff that just needs to work – waste management, again, but also water supply, energy supply etc. – and there is the highly experimental part, where the city attempts to implement radically new solutions on the grounds of radically new technologies. The usual policy that I can observe in local governments, now, is to create big local companies for the former category, and to let private businesses take over the second one entirely. Now, imagine that when you pay taxes to the local government, part of your tax money goes into an investment fund, which takes participations in local startups, active in the domain of those experimental solutions and new technologies. Your tax money goes into a portfolio of investments.

 

Imagine even more. There is a local crowdfunding platform, similar to Kickstarter or StartEngine, where you can put your money directly into those local ventures, without passing through the local investment fund as a middleman. On that crowdfunding platform, the same local investment fund can compete for funding with other ventures. A cryptocurrency, internal to that crowdfunding platform, could be used to make the financial rules of the investment game clearer.

 

When I filed that idea for review, in the form of an article, with a Polish scientific journal, I received back an interestingly critical review. There were two main lines of criticism. Firstly, where is the advantage of my proposed solution over the presently applied institutional schemes? How could my solution improve smart urban development, as compared to what local governments currently do? Secondly, doesn’t it go too far from the mission of local governments? Doesn’t my scheme push public goods too far into private hands and doesn’t it make local governments too capitalistic?

 

I need to address those questions, both for revising my article, and for giving a nice closure to this particular, educational story in the fundamentals of finance. Functionality first, thus: what is the point? What can possibly be improved with that financial scheme I propose? Finance has an essential function here: it meets the need for liquidity, largely through the mechanism of financial markets. Liquidity is the capacity to enter into transactions. For any given situation there is a total set T of transactions that an entity, finding themselves in this situation, could be willing to enter into. Usually, we can’t enter them all, I mean we, entities. Individuals, businesses, governments: we are limited in our capacity to enter transactions. For the given total set T of transactions, there is just a subset Ti that the i-th entity can participate in. The fraction « Ti/T » is a measure of the liquidity this entity has.

 

Question: if, instead of doing something administratively, or granting a simple subsidy to a private agent, local governments act as investment funds in local projects, how does it change their liquidity, and the liquidity of the local communities they are the governments of? I went to the website of the Polish Central Statistical Office, there I took slightly North-East and landed in their Local Data Bank. I asked around for data regarding the financial stance of big cities in Poland, and I found data about Wroclaw, Lodz, Krakow, Gdansk, Kielce, Poznan, and Warsaw. I focused on the investment outlays of local governments, on the number of new business entities registered every year per 10 000 residents, and on population. Here below, you can find three summary tables regarding these metrics. You will see for yourself, but in a bird’s eye view, we have more or less stationary populations, and local governments spending a shrinking part of their total budgets on fixed local assets. Local governments back off from financing those assets. At the same time, there is a growing stir in business. There are more and more new business entities registered every year, in relation to population. Those local governments look as if they were out of ideas as for how to work with that local business. Can my idea change the situation? I develop on this one further below those three tables.

 

 

The share of investment outlays in the total expenditures of the city council, in major Polish cities
Year | Wroclaw | Lodz | Krakow | Gdansk | Kielce | Poznan | Warsaw
2008 | 31,8% | 21,0% | 19,7% | 22,6% | 15,3% | 27,9% | 19,8%
2009 | 34,6% | 23,5% | 20,4% | 20,6% | 18,6% | 28,4% | 17,8%
2010 | 24,2% | 15,2% | 16,7% | 24,5% | 21,2% | 29,6% | 21,4%
2011 | 20,3% | 12,5% | 14,5% | 33,9% | 26,9% | 30,1% | 17,1%
2012 | 21,5% | 15,3% | 12,6% | 38,2% | 21,9% | 20,8% | 16,8%
2013 | 15,0% | 19,3% | 11,0% | 28,4% | 18,5% | 18,1% | 15,0%
2014 | 15,6% | 24,4% | 16,4% | 27,0% | 18,6% | 11,8% | 17,5%
2015 | 18,4% | 26,8% | 13,7% | 21,3% | 23,8% | 24,1% | 10,2%
2016 | 13,3% | 14,3% | 11,5% | 15,2% | 10,7% | 17,5% | 9,0%
2017 | 11,7% | 10,2% | 11,5% | 12,2% | 14,1% | 12,3% | 12,0%
Delta 2017 – 2008 | -20,1% | -10,8% | -8,2% | -10,4% | -1,2% | -15,6% | -7,8%

 

 

Population of major cities
Year | Wroclaw | Lodz | Krakow | Gdansk | Kielce | Poznan | Warsaw
2008 | 632 162 | 747 152 | 754 624 | 455 581 | 205 094 | 557 264 | 1 709 781
2009 | 632 146 | 742 387 | 755 000 | 456 591 | 204 835 | 554 221 | 1 714 446
2010 | 630 691 | 730 633 | 757 740 | 460 509 | 202 450 | 555 614 | 1 700 112
2011 | 631 235 | 725 055 | 759 137 | 460 517 | 201 815 | 553 564 | 1 708 491
2012 | 631 188 | 718 960 | 758 334 | 460 427 | 200 938 | 550 742 | 1 715 517
2013 | 632 067 | 711 332 | 758 992 | 461 531 | 199 870 | 548 028 | 1 724 404
2014 | 634 487 | 706 004 | 761 873 | 461 489 | 198 857 | 545 680 | 1 735 442
2015 | 635 759 | 700 982 | 761 069 | 462 249 | 198 046 | 542 348 | 1 744 351
2016 | 637 683 | 696 503 | 765 320 | 463 754 | 197 704 | 540 372 | 1 753 977
2017 | 638 586 | 690 422 | 767 348 | 464 254 | 196 804 | 538 633 | 1 764 615
Delta 2017 – 2008 | 6 424 | (56 730) | 12 724 | 8 673 | (8 290) | (18 631) | 54 834

 

Number of newly registered business entities per 10 000 residents, in major Polish cities
Year | Wroclaw | Lodz | Krakow | Gdansk | Kielce | Poznan | Warsaw
2008 | 190 | 160 | 200 | 190 | 140 | 210 | 200
2009 | 195 | 167 | 205 | 196 | 149 | 216 | 207
2010 | 219 | 193 | 241 | 213 | 182 | 238 | 274
2011 | 221 | 169 | 204 | 195 | 168 | 244 | 249
2012 | 228 | 187 | 230 | 201 | 168 | 255 | 274
2013 | 237 | 187 | 224 | 211 | 175 | 262 | 307
2014 | 236 | 189 | 216 | 217 | 157 | 267 | 303
2015 | 252 | 183 | 248 | 236 | 185 | 283 | 348
2016 | 265 | 186 | 251 | 238 | 176 | 270 | 364
2017 | 272 | 189 | 257 | 255 | 175 | 267 | 345
Delta 2017 – 2008 | 82,00 | 29,00 | 57,00 | 65,00 | 35,00 | 57,00 | 145,00

 

Let’s take two cases from the table: my hometown Krakow, and my capital Warsaw. In the former case, the negative gap in the investment outlays of the local government is minus 44 million zlotys, some €10 mln, and in the latter case it is minus 248,46 million zlotys, thus about €56,5 mln. If we really want to get after new technologies in cities, we need to top up those gaps, possibly with a surplus. How can my idea help to save the day?

 

When I try to spend €10 mln more on urban fixed assets, I need to have all those €10 mln. I need to own them directly, in my balance sheet, before spending them. On the other hand, when I want to create an investment fund, which would take part in local startups, and through their intermediary would make those €10 mln worth of assets happen in real life, I need much less. I start with the balance sheet directly attached to those assets: €10 mln in fixed assets = equity of the startup(s) + liabilities of the startup(s). Now, equity of the startup(s) = shares of our investment fund + shares of other partners. At the end of the day, the local government could finance assets of €10 mln with 1 or 2 million euros of its own equity, maybe even less.
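
A back-of-the-envelope version of that balance-sheet logic is given below. The two proportions in it, the startups' leverage and the fund's share in their equity, are purely illustrative assumptions, not data:

```python
target_assets = 10_000_000  # € of urban fixed assets we want to see happen

# Illustrative assumptions (not data): startups finance 40% of assets with debt,
# and the local investment fund holds 30% of the startups' equity.
debt_share_of_assets = 0.40
fund_share_of_equity = 0.30

startup_equity = target_assets * (1 - debt_share_of_assets)  # €6.0 mln
fund_money_in  = startup_equity * fund_share_of_equity       # €1.8 mln

print(f"Startup equity needed:        €{startup_equity:,.0f}")
print(f"Local government's own stake: €{fund_money_in:,.0f}")
```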

 

From there on, it went sort of out of hand. I have that mental fixation on things connected to artificial intelligence and neural networks. You can find the latest account in English in the update entitled « What are the practical outcomes of those hypotheses being true or false? ». If you speak French, there is a bit more, and more recent, in « Surpopulation sauvage ou compétition aux États-Unis ». Anyway, I did it. I made a neural network in order to simulate the behaviour of my financial concept. Below, I am presenting a graphical idea of that network. It combines a multilayer perceptron, strictly speaking, with components of deep learning: observation of the fitness function, and the feeding back of it, as well as selection and preference regarding different neural outputs of the network. I am using that neural network as a simulator of collective intelligence.

 

So, as I am assuming that we are collectively intelligent in our local communities, I make the following logical structure. Step 1: I take four input variables, as listed below. They are taken from real statistics about those 7 big Polish cities, named above – Wroclaw, Lodz, Krakow, Gdansk, Kielce, Poznan, Warsaw – over the period from 2008 through 2017.

 

Input variable 1: Investment outlays of the local government [mln]

Input variable 2: Overall expenses of the local government [mln]

Input variable 3: Population [headcount]

Input variable 4: Number of new business entities registered annually [coefficient]

 

In step 2, I attach to those real input variables an output variable, a hypothetical one: the capital engaged in the local government’s investment fund, initially calculated as if 5% of new business entities were financed with €100 000 each. I calculate the average value of that variable across the whole sample of 7 cities, and it makes €87 mln as the expected value. This is the amount of money the average city among those seven could put in that local investment fund to support local startups and their projects of smart urban development.
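
The hypothetical output variable is computed in one line per city-year. Here is a sketch of that line, with one made-up city-year close to the sample averages; the real input values come from the Local Data Bank figures shown in the tables above:

```python
def fund_capital_eur(population, new_businesses_per_10k,
                     financed_share=0.05, ticket_eur=100_000):
    """Capital hypothetically engaged by the local investment fund:
    5% of the newly registered businesses, financed with EUR 100 000 each."""
    new_businesses = population / 10_000 * new_businesses_per_10k
    return new_businesses * financed_share * ticket_eur

# Made-up city-year, close to the sample averages (population ~721 083, coefficient ~223)
print(f"€{fund_capital_eur(721_083, 223):,.0f}")
# ≈ €80 mln, the same order of magnitude as the €87 mln expected value
```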

 

In step 3, I run my neural network through the empirical data, and then I make it do additional 5000 experimental rounds, just to make it look for a match between the input variables – which can change as they want – and the output variable, which I have almost pegged at €87 mln. I say ‘almost’, as in practice the network will generate a bit of wobbling around those €87 mln. I want to see what possible configurations of the input variables can arise, through different patterns of collective learning, around that virtually pegged value of the output variable.

 

I hypothesise 5 different ways of learning, or 5 different selections in that Neuron 4 you can see in the picture above. Learning pattern #1 consists in systematically preferring the neural output of the sigmoid function. It is a type of function which systematically calms down any shocks and sudden swings in input phenomena. It is like a collective pretence that whatever kind of s**t is really going on, everything is just fine. Learning pattern #2 prefers the output of the hyperbolic tangent function. This one tends to be honest, and when there is a shock, it yields a shock, without any f**kery about it. It is like a market with clear rules of competition. Learning pattern #3 takes the lesser error of the two functions. It is a most classical approach in neural networks: the closer I get to the expected value, the better I am learning, that sort of thing. Learning pattern #4 takes an average of those two functions. The greater value among those being averaged has the greater impact on the resulting average; thus, the average of two functions is like a hierarchy of importance, expressed in one number. Finally, learning pattern #5 takes that average, just as #4, but it adds the component of growing resistance to new information. At each experimental round, it divides the value of the error fed back into the network by the consecutive number of the round. Error generated in round 2 gets divided by 2, and that generated in round 4000 is divided by 4000, etc. This is like a person who, as they process new information, develops a growing sentiment of being fully schooled on the topic, and becomes more and more resistant to new input.
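
To give you a feel for the mechanics, here is a much-simplified sketch of that learning loop. It is not the exact network from the picture above, which has more neurons and runs on the empirical data; the starting values below are made-up, standardised numbers, and the function names are mine:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def experiment(inputs, expected, rounds=5000, pattern=1, seed=1):
    """One run of the learning loop under a given learning pattern.
    inputs: standardised input values; expected: the (almost) pegged output value."""
    random.seed(seed)
    x = list(inputs)
    for n in range(1, rounds + 1):
        weights = [random.uniform(0, 1) for _ in x]             # random synaptic coefficients
        h = sum(w * xi for w, xi in zip(weights, x)) / len(x)   # hidden neuron: weighted signal
        err_sig = expected - sigmoid(h)                         # error of the sigmoid output
        err_tanh = expected - math.tanh(h)                      # error of the hyperbolic tangent output
        if pattern == 1:                                        # prefer the sigmoid
            err = err_sig
        elif pattern == 2:                                      # prefer the hyperbolic tangent
            err = err_tanh
        elif pattern == 3:                                      # take the lesser of the two errors
            err = min(err_sig, err_tanh, key=abs)
        elif pattern == 4:                                      # average of the two errors
            err = (err_sig + err_tanh) / 2
        else:                                                   # pattern 5: growing resistance to learning
            err = (err_sig + err_tanh) / 2 / n
        x = [xi + err for xi in x]                              # feed the error back into the inputs
    return x

# Made-up, standardised starting values for the four input variables; output pegged at 0.5
start = [0.177, 0.996, 0.721, 0.223]
for p in range(1, 6):
    print(p, [round(v, 3) for v in experiment(start, expected=0.5, pattern=p)])
```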

 

In the table below, I present the results of those simulations. Learning patterns #2 and #4 develop structures somewhat more modest than the actual reality, expressed as empirical averages in the first numerical line of the table. These are urban communities where that investment fund I am thinking about slightly grows in importance, in relation to the whole municipal budget. Learning patterns #1 and #3 develop crazy magnitudes in those input variables. Populations grow 9 or 10 times bigger than the present ones, the coefficient of new businesses per 10 000 people grows 6 or 7 times, and municipal budgets swell 14 ÷ 15 times. The urban investment fund becomes close to insignificant. Learning pattern #5 goes sort of in the middle between those extremes.

 

 

  | Input variable 1 | Input variable 2 | Input variable 3 | Input variable 4 | Output variable
Initial averages of empirical values | €177 mln | €996 mln | 721 083 | 223 | €87 mln
Type of selection in neural output | Sample results of simulation with the neural network
Sigmoid preferred | €2 440 mln | €14 377 mln | 7 093 526,21 | 1 328,83 | €87 mln
Hyperbolic tangent preferred | €145 mln | €908 mln | 501 150,03 | 237,78 | €87 mln
Least error preferred | €2 213 mln | €13 128 mln | 6 573 058,50 | 1 490,28 | €87 mln
Average of the two errors | €122 mln | €770 mln | 432 702,57 | 223,66 | €87 mln
Average of the two errors, with growing resistance to learning | €845 mln | €5 043 mln | 2 555 800,36 | 661,61 | €87 mln

 

What is the moral of the fairy tale? As I see it now, it means that for any given initial situation, as regards that financial scheme I have in mind for cities and their local governments, future development can go two opposite ways. The city can get slightly smaller and smarter, with more or less the same occurrence of new businesses emerging every year. It happens when the local community learns, as a collective intelligence, with little shielding from external shocks. This is like a market-oriented city. In terms of quantitative dynamics, it makes me think about cities like Vienna (Austria), Lyon (France), or my home city, Krakow (Poland). On the other hand, the city can shield itself somehow against socio-economic shocks, for example with heavy subsidies, and then it gets out of control. It grows big like hell, and businesses just start popping up all around.

 

At the first sight, it seems counterintuitive. We associate market-based, open-to-shocks solutions with uncontrolled growth, and interventionist, counter-cyclical policies with sort of a tame status quo. Still, cities are strange beasts. They are like crocodiles. When you make them compete for food and territory, they grow just to a certain size, ‘cause when they grow bigger than that, they die. Yet, when you allow a crocodile to live in a place without much competition, and plenty of food around, it grows to enormous proportions.

 

My temporary conclusion is that my idea of a local investment fund to boost smart change in cities is workable, i.e. it has a chance to thrive as a financial mechanism, when the whole city is open to market-based solutions and receives little shielding from economic shocks.

 

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you do, I will be grateful if you tell me the two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Surpopulation sauvage ou compétition aux États-Unis

 

My editorial on You Tube

 

Here I am again, pestering you with neural networks and their application in the social sciences. I stay essentially in the universe of energy efficiency. I am thus following up on « Tenez-vous bien, chers Autrichiens ». This time, I take on the largest economy in the world, that of the United States, and its energy efficiency. Americans love competition, so that is precisely what I include in the neural network: competition between neurons.

As usual, there is a bit of chaos in my intellectual path. Yesterday, I had this idea: given the logical structure of my perceptron, I obtain different patterns of behaviour from it. What if I put those behaviours in competition of some sort? What if I built a perceptron that organises that competition?

So I want to put my neurons in competition with each other. I mean, not my own neurons, strictly speaking, just the neurons of my perceptron. I had started to map that idea in an Excel file, a bit by trial and error. First of all, the actors. Who competes against whom? So far, I have identified two major functional distinctions among the possible behaviours of my perceptron. Firstly, there is the distinction between two basic activation functions: the sigmoid on the one hand, and the hyperbolic tangent on the other hand. Secondly, I have observed a pronounced difference when I introduce into my perceptron the self-observation (back-propagation) of cohesion, understood as the Euclidean distance between the variables. I therefore construct four activation neurons, as combinations of those possibilities:

 

Output neuron no. 1: the basic sigmoid

Output neuron no. 2: the basic hyperbolic tangent

Output neuron no. 3: the sigmoid with observation of cohesion

Output neuron no. 4: the hyperbolic tangent with observation of cohesion

 

Output neurons 1 and 2 are connected to the following synaptic path: absorption of the input variables in the first layer (observation neuron), followed by the attribution of random weight coefficients to each variable and the summation of the input variables thus weighted (weighing neuron) in the hidden layer. The weighing neuron spits out the weighted average of the input variables, which is put as the argument into the neural activation function (processing neuron), and the latter spits out a result. The observation neuron compares that result to the expected value of the output variable, a local error is measured, and the perceptron adds that error to the values of the input variables in the next round of experimentation.

 

Output neurons 3 and 4 involve a slightly more complex synaptic path. First of all, in the input layer of the perceptron, I add a neuron parallel to the one that generates the random coefficients. This second neuron activates after the first round of experimentation. Once that round has taken place, this neuron computes the Euclidean distances between the variables; it returns an average distance for each variable separately, as well as the average distance between all the variables. In fact, that is a job for a whole separate synaptic sequence, but to keep things simple I describe it as one neuron. I see its place in the input layer, since its essential function is observation. Complex observation, admittedly, but observation all the same. I call this neuron the ‘neuron of perceived cohesion’, and I assume it needs a partner in the hidden layer, thus a neuron which combines the perception of cohesion with the nervous signal coming from the perception of the input variables as such. I christen this second hidden neuron the ‘neuron of weighing by cohesion’.

 

Right, I was talking about competition. Competition means selection. When I enter into competition against entity A, it implies the existence of at least one entity B which will choose between me and A.

The choice can be digital or analogue. The digital choice is 0 or 1, with nothing in between. B will choose either me or A, and the winner takes the whole pot. The analogue choice lets B make a basket out of the respective participations of me and A. B can take, say, 60% of me and 40% of A. As I want to introduce the component of competition into my perceptron, I add an additional neural layer, with a single neuron to start with: the selection neuron. This neuron receives the signals from the 4 output neurons and makes its choice.

 

Important: this is a game heavy with consequences. The selection made in one round of experimentation determines the error value that gets propagated, in the next round, into all four output neurons. The result of competition at a given moment determines the conditions of competition at later moments.

 

The nervous signal sent by the output neurons is the local error of estimation. The selection neuron makes its choice among four errors. Further below, I discuss the results of different strategies for that choice. For the moment, I want to show the empirical context of those strategies. Below, I introduce two graphs which show the error generated by the four output neurons at the very beginning of the learning process, respectively over the first 20 rounds of experimentation and over the first 100 rounds. You can see that the black line on both graphs, which represents the error spat out by output neuron no. 4, thus by the hyperbolic tangent with observation of cohesion, is by far the largest and the most variable. The error generated by the sigmoid with observation of cohesion is substantially smaller, but it stays well above the errors coming from output neurons no. 1 and 2, those which do not care at all about the internal cohesion of the perceptron.

I keep asking myself what learning actually is. The two graphs show three ways of learning radically different from one another. Which one is the best? What is the educational function of those errors? When I make trials in which I err just a tiny little bit, thus when I operate like output neurons 1 and 2, I am exact and precise, but I venture very close to my point of origin. In fact, I accumulate little new experience. On the other hand, if my errors swing across values like those shown by the black line, thus by the hyperbolic tangent with observation of cohesion, I have little precision, but I accumulate much more experimental memory.

 

Four selection strategies take shape, equivalent to three types of competition between the output neurons. Selection no. 1: the selection neuron chooses the output neuron that generates the smallest error of the four. Usually, that is the neuron producing the hyperbolic tangent without observation of cohesion. It is a competition where the most precise and the most predictable wins every time. Selection no. 2: the neuron generating the largest error has the honour of propagating its error into the subsequent generations of the perceptron. Normally that is output neuron no. 4: the hyperbolic tangent with observation of cohesion. Selection no. 3: the selection neuron takes the arithmetic average of the four errors supplied by the four output neurons. Logically, the output neuron generating the largest error will dominate. This selection is thus a mathematical representation of a hierarchy between the output neurons.

 

Finally, there is competition conditional on a predefined criterion. I take selection mode no. 2, thus I choose the largest of the four errors and I compare it to a criterion. Let’s say I expect the perceptron to generate an error greater than the standardised average annual growth of energy efficiency in the country in question. In the case of the United States, that yardstick value is 0,014113509. If any output neuron (let’s be honest, it will be the hyperbolic tangent observing its own cohesion) generates an error greater than 0,014113509, that error is propagated into the next round of experimentation. Otherwise, the error to propagate is 0. It is thus a condition in which I tell my perceptron: either you learn fast and well, or you do not learn at all.
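
To keep things concrete, the four selection rules can be written down as a single function choosing among the four local errors. The threshold is the yardstick value quoted above for the United States; everything else in the snippet is a naming convention of mine:

```python
def select_error(errors, mode, threshold=0.014113509):
    """Selection neuron: errors is the list of the four local errors
    coming from the four output neurons; mode picks the competition rule."""
    if mode == 1:                    # selection of the smallest error (in absolute value)
        return min(errors, key=abs)
    if mode == 2:                    # selection of the largest error (in absolute value)
        return max(errors, key=abs)
    if mode == 3:                    # arithmetic average of the errors (hierarchy)
        return sum(errors) / len(errors)
    # mode 4: conditional competition - propagate the largest error only if it beats the threshold
    biggest = max(errors, key=abs)
    return biggest if abs(biggest) > threshold else 0.0

# Example: four hypothetical local errors from output neurons 1-4
errs = [0.002, -0.004, 0.009, 0.031]
for m in range(1, 5):
    print(m, select_error(errs, m))
```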

 

Right, let’s get down to business. Here below is the list of my variables.

 

Variable code | Variable description
Q/E | GDP per kg of oil equivalent of energy consumed (constant prices, 2011 PPP $) – OUTPUT VARIABLE
CK/PA | Average fixed capital per one national patent application (millions of constant 2011 PPP $)
A/Q | Aggregate depreciation of fixed assets as % of GDP
PA/N | National patent applications per 1 million inhabitants
M/Q | Aggregate supply of money as % of GDP
E/N | Final consumption of energy in kilograms of oil equivalent per capita
RE/E | Consumption of renewable energy as % of total energy consumption
U/N | Urban population as % of total population
Q | Gross Domestic Product (millions of constant 2011 PPP $)
Q/N | GDP per capita (constant 2011 PPP $)
N | Population

 

 

I take these variables and I put them into my perceptron, enriched with that selection neuron. I simulate four alternative cases of selection, as discussed above. Here, in the next table below, are the results of my perceptron’s work after 5000 rounds of learning. Note that, for the selection strategy conditional on the predefined threshold, the 5000 rounds boil down to barely 72 rounds, since all the others return an error of 0.

 

Variable | Value 1990 | Value 2014 | Selection of the smallest error | Selection of the largest error | Average of errors – hierarchy | Competition conditional on the predefined threshold
Q/E | $ 4,83 | $ 7,46 | $ 9,39 | $ 38,18 | $ 34,51 | $ 34,16
CK/PA | 291,84 | 185,38 | 263,29 | 1 428,92 | 1 280,59 | 1 266,39
A/Q | 14,5% | 15,0% | 19,0% | 79,1% | 71,4% | 70,7%
PA/N | 358,49 | 892,46 | 1 126,42 | 4 626,32 | 4 180,92 | 4 138,29
M/Q | 71,0% | 90,1% | 113,6% | 464,7% | 420,0% | 415,8%
E/N | 7 671,77 | 6 917,43 | 8 994,27 | 40 063,47 | 36 109,61 | 35 731,13
RE/E | 4,2% | 8,9% | 11,2% | 45,6% | 41,3% | 40,8%
U/N | 75,3% | 81,4% | 102,4% | 416,5% | 376,6% | 372,7%
Q | 9 203 227,00 | 16 704 698,00 | 21 010 718,45 | 85 428 045,72 | 77 230 318,76 | 76 445 587,64
Q/N | $ 36 398,29 | $ 52 292,28 | $ 65 771,82 | $ 267 423,42 | $ 241 761,31 | $ 239 304,79
N | 252 847 810 | 319 448 634 | 401 793 873 | 1 633 664 524 | 1 476 897 088 | 1 461 890 454

 

Yes, only selection no. 1 seems reasonable. The other competition strategies return absurdly high values. Still, here we need to remember the essential thing about an artificial neural network: it is a logical structure, not an organic one. A logical structure means a set of proportions. So I transform those absolute values returned by my perceptron into proportions relative to the value of the output variable. The output variable Q/E is thus equal to 1, and the values of the input variables {CK/PA ; A/Q ; PA/N ; M/Q ; E/N ; RE/E ; U/N ; Q ; Q/N ; N} are expressed as multiples of 1. I show the results of such a denomination in the next table below. How to read them? Well, if you read A/Q = 0,02024, it means that the aggregate depreciation of fixed assets, taken as a percentage of GDP, is equal to the fraction 0,02024 of the coefficient Q/E, and so on. Each column of that table of indexed values represents a structure defined by proportions relative to Q/E. You may notice that, looked at from this angle, those results of the neural network’s simulation are no longer so absurd. As sets of proportions, they are quite repetitive structures. The difference is the anchor value, thus energy efficiency. Intuitively, I see in them different scenarios of energy efficiency in the United States, in case American society has to adapt to different levels of overpopulation, that overpopulation being either gentle (selection of the smallest error) or wild (all the other selections).

 

 

Values indexed on the output variable

Variable | Value 1990 | Value 2014 | Selection of the smallest error | Selection of the largest error | Average of errors – hierarchy | Competition conditional on the predefined threshold
Q/E | 1,00000 | 1,00000 | 1,00000 | 1,00000 | 1,00000 | 1,00000
CK/PA | 60,41009 | 24,83314 | 28,13989 | 37,41720 | 37,12584 | 37,08887
A/Q | 0,03005 | 0,02007 | 0,02024 | 0,02071 | 0,02070 | 0,02070
PA/N | 74,20624 | 119,55525 | 119,98332 | 121,18428 | 121,14657 | 121,14178
M/Q | 0,14703 | 0,12070 | 0,12097 | 0,12173 | 0,12171 | 0,12171
E/N | 1 588,03883 | 926,66575 | 958,89830 | 1 049,32878 | 1 046,48878 | 1 046,12839
RE/E | 0,00864 | 0,01194 | 0,01194 | 0,01196 | 0,01196 | 0,01196
U/N | 0,15587 | 0,10911 | 0,10911 | 0,10911 | 0,10911 | 0,10911
Q | 1 905 046,16228 | 2 237 779,02450 | 2 237 779,02450 | 2 237 779,02450 | 2 237 779,02450 | 2 237 779,02450
Q/N | 7 534,35896 | 7 005,12942 | 7 005,12942 | 7 005,12942 | 7 005,12942 | 7 005,12942
N | 52 338 897,00657 | 42 793 677,11833 | 42 793 677,11833 | 42 793 677,11833 | 42 793 677,11833 | 42 793 677,11833
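
For the record, here is how one column of the indexed table above can be obtained from one column of absolute values. I use the rounded ‘Value 2014’ figures, so the output reproduces the indexed column only approximately; the original computation runs on unrounded values:

```python
# One column of absolute values (Value 2014), keyed by variable code
column_2014 = {
    "Q/E": 7.46, "CK/PA": 185.38, "A/Q": 0.150, "PA/N": 892.46, "M/Q": 0.901,
    "E/N": 6_917.43, "RE/E": 0.089, "U/N": 0.814,
    "Q": 16_704_698.00, "Q/N": 52_292.28, "N": 319_448_634,
}

# Index every variable on the output variable Q/E, so that Q/E itself equals 1
anchor = column_2014["Q/E"]
indexed = {code: value / anchor for code, value in column_2014.items()}

for code, value in indexed.items():
    print(f"{code:6s} {value:,.5f}")
```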

 

I keep delivering good, almost new science to my readers, just a bit dented in the process of conception. I remind you that you can download the business plan of the BeFund project (also available in English). You can also download my book entitled “Capitalism and Political Power”. I want to use crowdfunding to give myself a financial base in this effort. You can support my research financially, according to your best judgment, through my PayPal account. You can also register as my patron on my Patreon account. If you do so, I will be grateful if you point out two important things to me: what kind of reward do you expect in exchange for the patronage, and what stages would you like to see in my work?