A civilisation of droplets

I am getting into the groove of a new form of expression: the rubber duck. I explained more specifically the theory of the rubber duck in the update entitled A test pitch of my ‘Energy Ponds’ business concept. I use mostly videos, where, as I am talking to an imaginary audience, I sharpen (hopefully) my own ideas and the way of getting them across. In this update, I am cutting out some slack from my thinking about the phenomenon of collective intelligence, and my use of neural networks to simulate the way that human, collective intelligence works (yes, it works).

The structure of my updates on this blog changes as my form is changing. Instead of placing the link to my video right after the first subheading of the update, I place it further below, sort of in conclusion. I prepare my updates with an extensive use of Power Point, both to practice a different way of formulating my ideas, and to have slides for my video presentation. Together with the link to You Tube, you will find another one, to the Power Point document.

Ad rem, i.e. get the hell to the point, man. I am trying to understand better my own thinking about collective intelligence and the bridging towards artificial intelligence. As I meditate about it, I find an essential phenomenological notion: the droplet of information. With the development of digital technologies, we communicate more and more with some sort of pre-packaged, whoever-is-interested-can-pick-it-up information. Videos on You Tube, blog updates, books, and articles are excellent examples thereof. When I talk to the camera of my computer, I am both creating a logical structure for myself, and a droplet of information for other people.

Communication by droplets is fundamentally different from other forms, like meetings, conversations, letters etc. Until recently, and by ‘recently’ I mean like 1990, most organized human structures worked with precisely addressed information. Since we started to grow the online part of our civilization, we have been coordinating more and more with droplet information. It is as if information was working like a hormone. As You Tube swells, we have more and more of that logical hormone accumulated in our civilization.

That’s precisely my point of connection with artificial intelligence. When I observe the way a neural network works (yes, I observe them working step by step, iteration by iteration, as strange as it might seem), I see a structure which uses error as food for learning. Residual local error is to a neural network what, once again, a hormone is to a living organism.

 Under the two links below, you will find:

  1. The You Tube video of this update,
  2. The Power Point presentation with the slides that accompany the video.

That would be all for now. If you want to contact me directly, you can mail at: goodscience@discoversocialsciences.com .

The collective archetype of striking good deals in exports

My editorial on You Tube

I keep philosophizing about the current situation, and I try to coin a story in my mind, a story meaningful enough to carry me through the weeks and months to come. I try to figure out a strategy for future investment, and, in order to do that, I am doing that thing called ‘strategic assessment of the market’.

Now, seriously, I am profiting from that moment of forced reclusion (in Poland we have just had compulsory sheltering at home introduced by law) to work a bit on my science, more specifically on the application of artificial neural networks to simulate collective intelligence in human societies. As I have been sending around draft papers on the topic, to various scientific journals (here you have a sample of what I wrote on the topic << click this link to retrieve a draft paper of mine), I have encountered something like a pretty uniform logic of constructive criticism. One of the main lines of reasoning in that logic goes like: ‘Man, it is interesting what you write. Yet, it would be equally interesting to explain what you mean exactly by collective intelligence. How does it or doesn’t it rhyme with individual intelligence? How does it connect with culture?’.

Good question, truly a good one. It is the question that I have been asking myself for months, since I discovered my fascination with the way that simple neural networks work. At the time, I observed intelligent behaviour in a set of four equations, put back to back in a looping sequence, and it was a ground-breaking experience for me. As I am trying to answer this question, my intuitive path is that of distinction between collective intelligence and the individual one. Once again (see The games we play with what has no brains at all ), I go back to William James’s ‘Essays in Radical Empiricism’, and to his take on the relation between reality and our mind. In Essay I, entitled ‘Does Consciousness Exist?’, he goes: “My thesis is that if we start with the supposition that there is only one primal stuff or material in the world, a stuff of which everything is composed, and if we call that stuff ‘pure experience,’ then knowing can easily be explained as a particular sort of relation towards one another into which portions of pure experience may enter. The relation itself is a part of pure experience; one of its ‘terms’ becomes the subject or bearer of the knowledge, the knower, the other becomes the object known. […] Just so, I maintain, does a given undivided portion of experience, taken in one context of associates, play the part of a knower, of a state of mind, of ‘consciousness’; while in a different context the same undivided bit of experience plays the part of a thing known, of an objective ‘content.’ In a word, in one group it figures as a thought, in another group as a thing. And, since it can figure in both groups simultaneously, we have every right to speak of it as subjective and objective both at once.”

Here it is, my distinction. Right, it is partly William James’s distinction. Anyway, individual intelligence is almost entirely mediated by conscious experience of reality, which is a representation thereof, not reality as such. Individual intelligence is based on individual representation of reality. By opposition, my take on collective intelligence is based on the theory of adaptive walks in rugged landscapes, a theory used both in evolutionary biology and in the programming of artificial intelligence. I define collective intelligence as the capacity to run constant experimentation across many social entities (persons, groups, cultures, technologies etc.), as regards the capacity of those entities to achieve a vector of desired social outcomes.

The expression ‘vector of desired social outcomes’ sounds like something invented by a philosopher and a mathematician, together, after a strong intake of strong spirits. I am supposed to be simple in getting my ideas across, and thus I am translating that expression into something simpler. As individuals, we are after something. We have values that we pursue, and that pursuit helps us make it through each consecutive day. Now, there is a question: do we have collective values that we pursue as a society? Interesting question. Bernard Bosanquet, the British philosopher who wrote ‘The Philosophical Theory of The State’[1], claimed very sharply that individual desires and values hardly translate into collective, state-wide values and goals to pursue. He claimed that entire societies are fundamentally unable to want anything; they can just be objectively after something. The collective being after something is essentially non-emotional and non-intentional. It is something like a collective archetype, occurring at the individual level somewhere below the level of consciousness, in the collective unconscious, which mediates between conscious individual intelligence and the external stuff of reality, to use William James’ expression.

How to figure out what outcomes we are after, as a society? This is precisely, for the time being, the central axis of my research involving neural networks. I take a set of empirical observations about a society, e.g. a set of country-year observations of 30 countries across 40 quantitative variables. Those empirical observations are the closest I can get to the stuff of reality. I make a simple neural network supposed to simulate the way a society works. The simpler this network is, the better. Each additional component of complexity requires making ever stronger assumptions about the way societies work. I use that network as a simple robot. I tell the robot: ‘Take one variable from among those 40 in the source dataset. Make it your output variable, i.e. the desired outcome of collective existence. Treat the remaining 39 variables as input, instrumental to achieving that outcome’. I make 40 such robots, and each of them produces a set of numbers, which is like a mutation of the original empirical dataset, and I can assess the similarity between each such mutation and the source empirical stuff. I do it by calculating the Euclidean distance between the vectors of mean values, respectively in each such clone and in the original data. Other methods can be used, e.g. kernel functions.
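
As an illustration, here is a minimal sketch of that procedure in Python. It is not my actual research code: the one-layer network, the training routine, and the dataset name are stand-ins of my choosing, and the point is just to show the loop ‘peg one variable as output, treat the rest as input, produce a clone of the dataset, measure its Euclidean distance to the source’.

```python
import numpy as np
import pandas as pd

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def clone_dataset(data: pd.DataFrame, outcome: str, epochs: int = 200) -> pd.DataFrame:
    """Train a minimal one-layer perceptron with `outcome` as the output variable
    and all remaining columns as input, then return the network's reconstruction
    of the dataset (the 'clone', or 'mutation')."""
    X = data.drop(columns=[outcome]).to_numpy(dtype=float)
    y = data[outcome].to_numpy(dtype=float)
    # Normalise to [0, 1] so the sigmoid behaves reasonably (simplifying assumption).
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-9)
    y_min, y_span = y.min(), y.max() - y.min() + 1e-9
    y_n = (y - y_min) / y_span
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        pred = sigmoid(X @ w)
        error = y_n - pred                                    # residual local error feeds learning
        w += 0.1 * X.T @ (error * pred * (1.0 - pred)) / len(y_n)
    clone = data.copy()
    clone[outcome] = sigmoid(X @ w) * y_span + y_min          # de-normalised prediction
    return clone

def distance_to_source(clone: pd.DataFrame, source: pd.DataFrame) -> float:
    """Euclidean distance between the vectors of mean values of the clone and the source."""
    return float(np.linalg.norm(clone.mean() - source.mean()))

# Hypothetical usage: `panel` would be the country-year dataset (numeric columns only),
# e.g. 30 countries x 40 variables.
# distances = {v: distance_to_source(clone_dataset(panel, v), panel) for v in panel.columns}
# The variables whose clones sit closest to the source data are the candidate 'desired outcomes'.
```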

I worked that method through with various empirical datasets, and my preferred one, for now, is Penn Tables 9.1. (Feenstra et al. 2015[2]), which is a pretty comprehensive overview of macroeconomic variables across the planetary board. The detailed results of my research vary, depending on the exact set of variables I take into account, and on the set of observations I select, still there is a tentative conclusion that emerges: as a set of national societies, living in separate countries on that crazy piece of rock, speeding through cosmic space with no roof whatsoever, just with air conditioning on, we are mostly after terms of trade, and after the way we work, prepare for work, and remunerate work. Numerical robots which I program to optimize variables such as the average price in exports, the share of labour compensation in Gross National Income, the average number of hours worked per year per person, or the number of years spent in education before starting professional activity: all these tend to win the race for similarity to the source empirical data. These seem to be the desired outcomes that our human collective intelligence is after.

Is it of any help regarding the present tough s**t we are waist deep in? If my intuitions are true, whatever we do regarding the COVID-19 pandemic will be based on an evolutionary, adaptive choice. Path #1 consists in collectively optimizing those outcomes whilst trying to deal with the pandemic, and dealing with the pandemic will be instrumental to, for example, the deals we strike in international trade, and to the average number of hours worked per person per year. An alternative Path #2 means reshuffling our priorities completely and reorganizing so as to pursue completely different goals. Which one are we going to take? Good question, very much about guessing rather than forecasting. Historical facts indicate that, so far, as a civilization, we have been rather slow out of the gate. Change in collectively pursued values has occurred slowly, progressively, at the pace of generations rather than press conferences.

In parallel to doing research on collective intelligence, I am working on a business plan for the project I named ‘Energy Ponds’ (see, for example: Bloody hard to make a strategy). I have done some market research down this specific avenue of my intellectual walk, and here below I am giving a raw account of progress therein.

The study of the market environment for the Energy Ponds project is pegged on one central characteristic of the technology which will eventually be developed: the amount of electricity possible to produce in the structure based on ram pumps and relatively small hydroelectric turbines. Will this amount be sufficient just to supply energy to a small neighbouring community, or will it be enough to be sold in wholesale amounts via auctions and deals with grid operators? In other words, is Energy Ponds a viable concept just for off-grid installations, or is it scalable up to facility size?

There are examples of small hydropower installations, which connect to big power grids in order to exploit incidental price opportunities (Kusakana 2019[3]).

With that basic question in mind, it is worth studying both the off-grid market for hydroelectricity and the wholesale, on-grid market. Market research for Energy Ponds starts, in the first subsection below, with a general, global take on the geographical distribution of the main factors, both environmental and socio-economic. The next sections study characteristic types of markets.

Overview of environmental and socio-economic factors 

Quantitative investigation starts with the identification of countries, where hydrological conditions are favourable to implementation of Energy Ponds, namely where significant water stress is accompanied by relatively abundant precipitations. More specifically, this stage of analysis comprises two steps. In the first place, countries with significant water stress are identified[4], and then each of them is checked as for the amount of precipitations[5], hence the amount of rainwater possible to collect.

Two remarks are worth formulating at this point. Firstly, in the case of big countries, such as China or the United States, covering both swamps and deserts, the target locations for Energy Ponds would be regions rather than countries as a whole. Secondly, and maybe a bit counterintuitively, water stress is not a strict function of precipitations. When studied in 2014, with the above-referenced data from the World Bank, water stress is Pearson-correlated with precipitations just at r = -0,257817141.

Water stress and precipitations have very different distributions across the set of countries reported in the World Bank’s database. Water stress varies strongly across space, and displays a variability (i.e. the quotient of its standard deviation over its mean value) of v = 3,36. Precipitations are distributed much more evenly, with a variability of v = 0,68. With that in mind, further categorization of countries as potential markets for the implementation of Energy Ponds has been conducted with the assumption that significant water stress is above the median value observed, thus above 14,306296%. As for precipitations, a cautious assumption, prone to subsequent revision, is that sufficient rainfall for sustaining a structure such as Energy Ponds is above the residual difference between the mean rainfall observed and its standard deviation, thus above 366,38 mm per year.
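
As a quick illustration, that selection rule can be written down in a few lines of pandas. The input file and column names are hypothetical stand-ins; the thresholds are the ones quoted above (water stress above the median of 14,306296%, precipitation above mean minus one standard deviation, i.e. 366,38 mm per year).

```python
import pandas as pd

# Hypothetical input: one row per country, with 2014 values of the two World Bank indicators
# ER.H2O.FWST.ZS (water stress, %) and AG.LND.PRCP.MM (precipitation, mm per year).
wb = pd.read_csv("water_stress_precipitation_2014.csv")  # columns: country, water_stress, precipitation

stress_threshold = wb["water_stress"].median()                           # ~14,31 % in the data quoted above
rain_threshold = wb["precipitation"].mean() - wb["precipitation"].std()  # ~366,38 mm per year

candidates = wb[(wb["water_stress"] > stress_threshold) &
                (wb["precipitation"] > rain_threshold)]

print(len(candidates), "candidate countries")  # 40 in the selection reported below
print(candidates["country"].tolist())
```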

That first selection led to focusing further analysis on 40 countries, namely: Kenya, Haiti, Maldives, Mauritania, Portugal, Thailand, Greece, Denmark, Netherlands, Puerto Rico, Estonia, United States, France, Czech Republic, Mexico, Zimbabwe, Philippines, Mauritius, Turkey, Japan, China, Singapore, Lebanon, Sri Lanka, Cyprus, Poland, Bulgaria, Germany, South Africa, Dominican Republic, Kyrgyz Republic, Malta, India, Italy, Spain, Azerbaijan, Belgium, Korea, Rep., Armenia, Tajikistan.

Further investigation focused on describing those 40 countries from the standpoint of the essential benefits inherent to the concept of Energy Ponds: prevention of droughts and floods on the one hand, with the production of electricity being the other positive outcome. The variable published by the World Bank under the heading of ‘Droughts, floods, extreme temperatures (% of population, average 1990-2009)’[6] has been taken both as such, and recalculated into a national headcount by multiplying it by the total population. In the first case, the relative importance of extreme weather phenomena for local populations is measured. When recalculated into the national headcount of people touched by extreme weather, this metric highlights the geographical distribution of the aggregate benefits possibly derived from adaptive resilience vis-a-vis such events.

Below, both metrics, i.e. the percentage and the headcount of population, are shown as maps. The percentage of population touched by extreme weather conditions is much more evenly distributed than its absolute headcount. In general, Asian countries seem to absorb most of the adverse outcomes resulting from climate change. Outside Asia, and, of course, within the initially selected set of 40 countries, Kenya seems to be the most exposed.    


Another possible take on the socio-economic environment for developing Energy Ponds is the strictly business one. Prices of electricity, together with the sheer quantity of electricity consumed, are the chief coordinates in this approach. Prices of electricity have been reported as retail prices for households, as Energy Ponds are very likely to be an off-grid local supplier. Sources of information used in this case are varied: EUROSTAT data has been used as regards prices in European countries[1], generally relevant for 2019. For other countries, sites such as STATISTA or www.globalpetrolprices.com have been used, and most of their figures are relevant for 2018. These prices are national averages across different types of contracts.

The size of electricity markets has been measured in two steps, starting with consumption of electricity per capita, as published by the World Bank[2], which has been multiplied by the headcount of population. Figures below give a graphical idea of the results. In general, there seems to be a trade-off between price and quantity, almost as in the classical demand function. The biggest markets of electricity, such as China or the United States, display relatively low prices. Markets with high prices are comparatively much smaller in terms of quantity. An interesting insight has been found when prices of electricity are compared with the percentage of population with access to electricity, as published by the World Bank[3]. Such a comparison, shown further below, reveals interesting outliers: Haiti, Kenya, India, and Zimbabwe. These are countries burdened with significant limitations as regards access to electricity. In these locations, projects such as Energy Ponds can possibly produce entirely new energy sources for local populations.

The possible implementation of Energy Ponds can take place in very different socio-economic environments. It is worth studying those environments as idiosyncratic types. Further below, the following types and cases are studied in more detail:

  1. Type ‘Large, cheap market with a lot of environmental outcomes’: China, India >> low price of electricity, locally limited access to electricity, prevention of droughts and floods,
  2. Type ‘Small or medium-sized, developed European economy with high prices of electricity and a relatively small market’,
  3. Special case: United States, ‘Large, moderately priced market, with moderate environmental outcomes’ >> moderate price of electricity, possibility to go off grid with Energy Ponds, prevention of droughts and floods,
  4. Special case: Kenya >> quite low access to electricity (63%) and a moderately high retail price of electricity ($0,22 per kWh), a big population affected by droughts and floods, Energy Ponds can increase access to electricity.

Table 1, further below, exemplifies the basic metrics of a hypothetical installation of Energy Ponds, in specific locations representative for the above-mentioned types and special cases. These metrics are:

  1. Discharge (of water) in m3 per second, in selected riverain locations. Each type among those above is illustrated with a few specific, actual geographical spots. The central assumption at this stage is that a local installation of Energy Ponds abstracts 20% of the flow per second in the river. Of course, should a given location be selected for a more in-depth study, specific hydrological conditions would have to be taken into account, and the 20% assumption might be revised upwards or downwards.
  2. Electric power to expect with the given abstraction of water. That power has been calculated with the assumption that an average ram pump can create an elevation, thus a hydraulic head, of about 20 metres. There are more powerful ram pumps (see for example: https://www.allspeeds.co.uk/hydraulic-ram-pump/ ), yet 20 metres is a safely achievable head to assume without precise knowledge of environmental conditions in the given location. Given that 20-metre head, the basic equation to calculate electric power in watts is:
     [Flow per second, in m3, calculated as 20% of abstraction from the local river] x 20 [head in metres, created by ram pumping] x 9,81 [gravitational acceleration] x 75% [average efficiency of hydroelectric turbines]

  3. Financial results to expect from the sales of electricity. Those results are calculated on the basis of two empirical variables: the retail price of electricity, referenced as mentioned earlier in this chapter, and the LCOE (Levelized Cost Of Energy). The latter is sourced from a report by the International Renewable Energy Agency (IRENA 2019[1]), and provisionally pegged at $0,05 per kWh. This is a global average, and in this context it plays the role of a simplifying assumption, which, in turn, allows direct comparison of various socio-economic contexts. Of course, each specific location for Energy Ponds bears a specific LCOE in the phase of implementation. With those two source variables, two financial metrics are calculated (a short computational sketch follows this list):
    • Revenues from the sales of electricity, as: [Electric power in kilowatts] x [8760 hours in a year] x [Local retail price for households per 1 kWh]
    • Margin generated over the LCOE, equal to: [Electric power in kilowatts] x [8760 hours in a year] x {[Retail price for households per 1 kWh] – $0,05}
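
The sketch below, in Python, strings those formulas together for one hypothetical location. The flow and the retail price fed in are illustrative values close to the Yangtze/Changde row of Table 1 (the revenue reported there implies a price of roughly $0,08 per kWh); everything else follows the assumptions stated above (20 m of head, 75% turbine efficiency, 8760 hours a year, LCOE of $0,05 per kWh).

```python
# Minimal sketch of the power and revenue arithmetic described above.
# All parameters are the simplifying assumptions stated in the text;
# the flow and the price are illustrative, close to the Yangtze/Changde row of Table 1.

HEAD_M = 20.0          # hydraulic head created by ram pumping, in metres
G = 9.81               # gravitational acceleration
EFFICIENCY = 0.75      # average efficiency of hydroelectric turbines
HOURS_PER_YEAR = 8760
LCOE_USD = 0.05        # levelized cost of energy, global average (IRENA 2019)

def energy_ponds_metrics(flow_m3_per_s: float, retail_price_per_kwh: float) -> dict:
    """Electric power, annual energy, revenue and margin for a given abstracted flow."""
    power_w = flow_m3_per_s * HEAD_M * G * EFFICIENCY     # the equation quoted in the text
    power_kw = power_w / 1000.0
    energy_kwh = power_kw * HOURS_PER_YEAR
    revenue = energy_kwh * retail_price_per_kwh
    margin = energy_kwh * (retail_price_per_kwh - LCOE_USD)
    return {"power_kW": power_kw, "energy_kWh": energy_kwh,
            "revenue": revenue, "margin_over_LCOE": margin}

print(energy_ponds_metrics(flow_m3_per_s=2400.0, retail_price_per_kwh=0.08))
# ~ power_kW: 353.16, energy_kWh: 3093681.6, revenue: 247494.5, margin_over_LCOE: 92810.4
```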

Table 1

Country | Location | Flow per second, with 20% abstraction from the river | Electric power (energy for sale per year) | Annual revenue (annual margin over LCOE)
--- | --- | --- | --- | ---
China | Near Xiamen, Jiulong River | 26 636,23 m3/s | 783,9 kW (6 867 006,38 kWh a year) | $549 360,51 ($206 010,19)
China | Near Changde, Yangtze River | 2400 m3/s | 353,16 kW (3 093 681,60 kWh a year) | $247 494,53 ($92 810,45)
India | North of Rajahmundry, Godavari River | 701 m3/s | 103,15 kW (903 612,83 kWh a year) | $54 216,77 ($9 036,13)
India | Ganges River near Patna | 2400 m3/s | 353,16 kW (3 093 681,60 kWh a year) | $185 620,90 ($30 936,82)
Portugal | Near Lisbon, Tagus River | 100 m3/s | 14,72 kW (128 903,40 kWh a year) | €27 765,79 (€22 029,59)
Germany | Elbe River between Magdeburg and Dresden | 174 m3/s | 25,6 kW (224 291,92 kWh a year) | €68 252,03 (€58 271,04)
Poland | Vistula between Krakow and Sandomierz | 89,8 m3/s | 13,21 kW (115 755,25 kWh a year) | €18 234,93 (€13 083,82)
France | Rhone River, south of Lyon | 3400 m3/s | 500,31 kW (4 382 715,60 kWh a year) | €773 549,30 (€582 901,17)
United States, California | San Joaquin River | 28,8 m3/s | 4,238 kW (37 124,18 kWh a year) | $7 387,71 ($5 531,50)
United States, Texas | Colorado River, near Barton Creek | 100 m3/s | 14,72 kW (128 903,40 kWh a year) | $14 643,43 ($8 198,26)
United States, South Carolina | Tennessee River, near Florence | 399 m3/s | 58,8 kW (515 097,99 kWh a year) | $66 499,15 ($40 744,25)
Kenya | Nile River, by Lake Victoria | 400 m3/s | 58,86 kW (515 613,6 kWh a year) | $113 435 ($87 654,31)
Kenya | Tana River, near Kibusu | 81 m3/s | 11,92 kW (104 411,75 kWh a year) | $22 970,59 ($17 750)

China and India are grouped in the same category for two reasons. Firstly, because of the proportion between the size of their markets for electricity and the pricing thereof: these are huge markets in terms of quantity, yet very frugal in terms of price per 1 kWh. Secondly, these two countries seem to represent the bulk of the population observed globally as affected by droughts and floods. Should the implementation of Energy Ponds be successful in these countries, i.e. should water management significantly improve as a result, environmental benefits would play a significant socio-economic role.

With those similarities in mind, China and India display significant differences as for both the environmental conditions and the economic context. China hosts powerful rivers, with very high flow per second. This creates an opportunity, and a challenge. The amount of water possible to abstract from those rivers through ram pumping, and the corresponding electric power possible to generate, are the opportunity. Yet ram pumps, as they are manufactured now, are mostly small-scale equipment. Creating ram-pumping systems able to abstract significant amounts of water from Chinese rivers, in the Energy Ponds scheme, is a technological challenge in itself, which would require specific R&D work.

That said, China is already implementing a nation-wide programme of water management, called ‘Sponge Cities’, which shows some affinity to the Energy Ponds concept. Water management in relatively small, network-like structures, seems to have a favourable economic and political climate in China, and that climate translates into billions of dollars in investment capital.

India is different in these respects. Indian rivers, at least in floodplains, where Energy Ponds can be located, are relatively slow, in terms of flow per second, as compared to China. Whilst Energy Ponds are easier to implement technologically in such conditions, the corresponding amount of electricity is modest. India seems to be driven towards financing projects of water management as big dams, or as local preservation of wetlands. Nothing like the Chinese ‘Sponge Cities’ programme seems to be emerging, to the author’s best knowledge.

European countries form quite a homogenous class of possible locations for Energy Ponds. Retail prices of electricity for households are generally high, whilst the river system is dense and quite disparate in terms of flow per second. In the case of most European rivers, flow per second is low or moderate, still the biggest rivers, such as Rhine or Rhone, offer technological challenges similar to those in China, in terms of required volume in ram pumping.

As regards the Energy Ponds business concept, the United States seem to be a market in their own right. Local populations are exposed to a moderate (although growing) impact of droughts and floods, whilst they consume big amounts of electricity, both in aggregate and per capita. Retail prices of electricity for households are noticeably disparate from state to state, although generally lower than those practiced in Europe[2]. Prices range from less than $0,1 per 1 kWh in Louisiana, Arkansas or Washington, up to $0,21 in Connecticut. It is worth noting that, with respect to prices of electricity, the state of Hawaii stands out, with more than $0,3 per 1 kWh.

The United States offer quite a favourable environment for private investment in renewable sources of energy, still largely devoid of systematic public incentives. It is a market of multiple, different ecosystems, and all ranges of flow in local rivers.    


[1] IRENA (2019), Renewable Power Generation Costs in 2018, International Renewable Energy Agency, Abu Dhabi. ISBN 978-92-9260-126-3

[2] https://www.electricchoice.com/electricity-prices-by-state/ last access March 6th, 2020


[1] https://ec.europa.eu/eurostat/

[2] https://data.worldbank.org/indicator/EG.USE.ELEC.KH.PC

[3] https://data.worldbank.org/indicator/EG.ELC.ACCS.ZS


[1] Bosanquet, B. (1920). The philosophical theory of the state (Vol. 5). Macmillan and Company, limited.

[2] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at http://www.ggdc.net/pwt

[3] Kusakana, K. (2019). Optimal electricity cost minimization of a grid-interactive Pumped Hydro Storage using ground water in a dynamic electricity pricing environment. Energy Reports, 5, 159-169.

[4] Level of water stress: freshwater withdrawal as a proportion of available freshwater resources >> https://data.worldbank.org/indicator/ER.H2O.FWST.ZS

[5] Average precipitation in depth (mm per year) >> https://data.worldbank.org/indicator/AG.LND.PRCP.MM

[6] https://data.worldbank.org/indicator/EN.CLC.MDAT.ZS

A few more insights about collective intelligence

My editorial on You Tube

I noticed it has been a month since I last posted anything on my blog. Well, been doing things, you know. Been writing, and thinking by the same occasion. I am forming a BIG question in my mind, a question I want to answer: how are we going to respond to climate change? Among all the possible scenarios of such response, which are we the most likely to follow? When I have a look, every now and then, at Greta Thunberg’s astonishingly quick social ascent, I wonder why we are so divided about something apparently so simple. I am very clear: this is not a rhetorical question on my part. Maybe I should claim something like: ‘We just need to get all together, hold our hands and do X, Y, Z…’. Yes, in a perfect world we would do that. Still, in the world we actually live in, we don’t. Does it mean we are collectively stupid, like baseline, and just some enlightened individuals can sometimes see the truly rational path of moving ahead? Might be. Yet, another view is possible. We might be doing apparently dumb things locally, and those apparent local flops could sum up to something quite sensible at the aggregate scale.

There is some science behind that intuition, and some very provisional observations. I finally (and hopefully) nailed down the revision of the article on energy efficiency. I have already started developing on this one in my last update, entitled ‘Knowledge and Skills’, and now, it is done. I have just revised the article, quite deeply, and by the same occasion, I hatched a methodological paper, which I submitted to MethodsX. As I want to develop a broader discussion on these two papers, without repeating their contents, I invite my readers to get acquainted with their PDF, via the archives of my blog. Thus, by clicking the title Energy Efficiency as Manifestation of Collective Intelligence in Human Societies, you can access the subject matter paper on energy efficiency, and clicking on Neural Networks As Representation of Collective Intelligence will take you to the methodological article. 

I think I know how to represent, plausibly, collective intelligence with artificial intelligence. I am showing the essential concept in the picture below. Thus, I start with a set of empirical data, describing a society. Well in the lines of what I have been writing on this blog since early spring this year, I take the quantitative variables in my dataset, e.g. GDP per capita, schooling indicators, the probability for an average person to become a mad scientist etc., and I ask: what is the meaning of those variables? Most of all, they exist and change together. Banal, but true. In other words, all that stuff represents the cumulative outcome of past, collective action and decision-making.

I decided to use the intellectual momentum, and I used the same method with a different dataset, and a different set of social phenomena. I took Penn Tables 9.1 (Feenstra et al. 2015[1]), thus a well-known base of macroeconomic data, and I followed the path sketched in the picture below.


Long story short, I have two big surprises. When I look upon energy efficiency and its determinants, it turns out energy efficiency is not really the chief outcome pursued by the 59 societies studied: they care much more about the local, temporary proportions between capital immobilised in fixed assets, and the number of resident patent applications. More specifically, they seem to be principally optimizing the coefficient of fixed assets per 1 patent application. That is quite surprising. It sends me back to my peregrinations through the land of evolutionary theory (see for example: My most fundamental piece of theory).

When I take a look at the collective intelligence (possibly) embodied in Penn Tables 9.1, I can see this particular collective wit aiming at optimizing the share of labour in the proceeds from selling real output in the first place. Then, almost immediately after, comes the average number of hours worked per person per year. You can click on this link and read the full manuscript I have just submitted with the Quarterly Journal of Economics.

Wrapping it (provisionally) up: I did some social science with the assumption of collective intelligence in human societies taken at the level of methodology, and I got truly surprising results. That thing about energy efficiency – i.e. the fact that when in presence of some capital in fixed assets, and some R&D embodied in patentable inventions, we seem to care about energy efficiency only secondarily – is really mind-blowing. I had already done some research on energy as a factor of social change, and, whilst I have never been really optimistic about our collective capacity to save energy, I assumed that we orient ourselves, collectively, on some kind of energy balance. Apparently, we do only when we have nothing else to pay attention to. On the other hand, the collective focus on macroeconomic variables pertinent to labour, rather than on prices and quantities, is just as gob-smacking. All economic education, when you start with Adam Smith and take it from there, assumes that economic equilibriums, i.e. those special states of society when we are sort of in balance among many forces at work, are built around prices and quantities. Still, in that research I have just completed, the only kind of price my neural network can build a plausibly acceptable learning around is the average price level in international trade, i.e. in exports, and in imports. As for all the prices which I have been taught, and which I have taught, are the cornerstones of economic equilibrium, like prices in consumption or prices in investment: when I peg them as output variables of my perceptron, the incriminated perceptron goes dumb like hell and yields negative economic aggregates. Yes, babe: when I make my neural network pay attention to the price level in investment goods, it comes to the conclusion that the best idea is to have negative national income, and negative population.

Returning to the issue of climate change and our collective response to it, I am trying to connect my essential dots. I have just served some well-cooked science, and now it is time to bite into some raw one. I am biting into facts which I cannot explain yet, like not at all. Did you know, for example, that there are more and more adult people dying in high-income countries, like per 1000, since 2014? You can consult the data available with the World Bank, as regards the mortality of men and that of women. Infant mortality is generally falling, just as adult mortality in low- and middle-income countries. It is just about adult people in wealthy societies categorized as ‘high income’: there are more and more of them dying per 1000. Well, I should maybe say ‘more of us’, as I am 51, and relatively well-off, thank you. Anyway, all the way up through 2014, adult mortality in high-income countries had been consistently subsiding, reaching its minimum in 2014 at 57,5 per 1000 in women, and 103,8 in men. In 2016, it went up to 60,5 per 1000 in women, and 107,5 in men. It seems counter-intuitive. High-income countries are the place where adults are technically exposed to the fewest fatal hazards. We have virtually no wars around high income, we have food in abundance, we enjoy reasonably good healthcare systems, so WTF? As regards low-income countries, we could claim that the adults who die are relatively the least fit for survival, but what do you want to be fit for in high-income places? Driving a Mercedes around? Why did it start to revert in 2014?

Intriguingly, high-income countries are also those where the difference in adult mortality between men and women is the most pronounced: in men, it is almost double what is observable in women. Once again, it is something counter-intuitive. In low-income countries, men are more exposed to death in battle, or to extreme conditions, like work in mines. Still, in high-income countries, such hazards are remote. Once again, WTF? Someone could say: it is about natural selection, about eliminating the weak genetics. Could be, and yet not quite. Elimination of weak genetics takes place mostly through infant mortality. Once we make it like through the first 5 years of our existence, the riskiest part is over. Adult mortality is mostly about recycling used organic material (i.e. our bodies). Are human societies in high-income countries increasing the pace of that recycling? Why since 2015? Is it more urgent to recycle used men than used women?

There is one thing about 2015, precisely connected to climate change. As I browsed some literature about droughts in Europe and their possible impact on agriculture (see for example All hope is not lost: the countryside is still exposed), it turned out that 2015 was precisely the year when we started to sort of officially admit that we have a problem with agricultural droughts on our continent. Even more interestingly, 2014 and 2015 seem to have been the turning point when aggregate damages from floods, in Europe, started to bend downwards after something like two decades of progressive increase. We swapped one calamity for another one, and starting from then, we started to recycle used adults at a more rapid pace. Of course, most of Europe belongs to the category of high-income countries.

See? That’s what I call raw science about collective intelligence. Observation with a lot of questions, and only a very remote idea as for the method of answering them. Something is apparently happening, maybe we are collectively intelligent in the process, and yet we don’t know how exactly (are we collectively intelligent). It is possible that we are not. Warmer climate is associated with greater prevalence of infectious diseases in adults (Amuakwa-Mensah et al. 2017[1]), for example, and yet it does not explain why greater adult mortality is happening in high-income countries. Intuitively, infections attack where people are poorly shielded against them, thus in countries with frequent incidence of malnutrition and poor sanitation, thus in the low-income ones.

I am consistently delivering good, almost new science to my readers, I love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for suggesting two things that Patreon suggests I suggest to you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Amuakwa-Mensah, F., Marbuah, G., & Mubanga, M. (2017). Climate variability and infectious diseases nexus: Evidence from Sweden. Infectious Disease Modelling, 2(2), 203-217.

[1] Feenstra, Robert C., Robert Inklaar and Marcel P. Timmer (2015), “The Next Generation of the Penn World Table” American Economic Review, 105(10), 3150-3182, available for download at www.ggdc.net/pwt

Sketching quickly alternative states of nature

My editorial on You Tube

I am thinking about a few things, as usual, and, as usual, it is a laborious process. The first one is a big one: what the hell am I doing what I am doing for? I mean, what’s the purpose and the point of applying artificial intelligence to simulating collective intelligence? There is one particular issue that I am entertaining in this regard: the experimental check. A neural network can help me in formulating very precise hypotheses as for how a given social structure can behave. Yet, these are hypotheses. How can I have them checked?

Here is an example. Together with a friend, we are doing some research about the socio-economic development of big cities in Poland, in the perspective of seeing them turning into so-called ‘smart cities’. We came to an interesting set of hypotheses generated by a neural network, but we have a tiny little problem: we propose, in the article, a financial scheme for cities but we don’t quite understand why we propose this exact scheme. I know it sounds idiotic, but well: it is what it is. We have an idea, and we don’t know exactly where that idea came from.

I have already discussed the idea in itself on my blog, in « Locally smart. Case study in finance.» : a local investment fund, created by the local government, to finance local startup businesses. Business means investment, especially at the aggregate scale and in the long run. This is how business works: I invest, and I have (hopefully) a return on my investment. If there is more and more private business popping up in those big Polish cities, and, in the same time, local governments are backing off from investment in fixed assets, let’s make those business people channel capital towards the same type of investment that local governments are withdrawing from. What we need is an institutional scheme where local governments financially fuel local startup businesses, and those businesses implement investment projects.

I am going to try and deconstruct the concept, sort of backwards. I am sketching the landscape, i.e. the piece of empirical research that brought us to formulating the whole idea of investment fund paired with crowdfunding.  Big Polish cities show an interesting pattern of change: local populations, whilst largely stagnating demographically, are becoming more and more entrepreneurial, which is observable as an increasing number of startup businesses per 10 000 inhabitants. On the other hand, local governments (city councils) are spending a consistently decreasing share of their budgets on infrastructural investment. There is more and more business going on per capita, and, in the same time, local councils seem to be slowly backing off from investment in infrastructure. The cities we studied as for this phenomenon are: Wroclaw, Lodz, Krakow, Gdansk, Kielce, Poznan, Warsaw.

More specifically, the concept tested through the neural network consists in selecting, each year, the 5% most promising local startups, and funding each of them with €80 000. The logic behind this concept is that when a phenomenon becomes more and more frequent – and this is the case of startups in big Polish cities – an interesting strategy is to fish out, consistently, the ‘crème de la crème’ from among those frequent occurrences. It is as if we were soccer promotors in a country where more and more young people start playing at a competitive level. A viable strategy consists, in such a case, in selecting, over and over again, the most promising players from the top of the heap and promoting them further.

Thus, in that hypothetical scheme, the local investment fund selects and supports the most promising from amongst the local startups. Mind you, that 5% rate of selection is just an idea. It could be 7% or 3% just as well. A number had to be picked, in order to simulate the whole thing with a neural network, which I present further. The 5% rate can be seen as an intuitive transference from the t-Student significance test in statistics. When you test a correlation for its significance, with the t-Student test, you commonly assume that at least 95% of all the observations under scrutiny are covered by that correlation, and you can tolerate a 5% fringe of outliers. I suppose this is why we picked, intuitively, that 5% rate of selection among the local startups: 5% sounds just about right to delineate the subset of most original ideas.

Anyway, the basic idea consists in creating a local investment fund controlled by the local government, and this fund would provide a standard capital injection of €80 000 to 5% of most promising local startups. The absolute number STF (i.e. financed startups) those 5% translate into can be calculated as: STF = 5% * (N/10 000) * ST10 000, where N is the population of the given city, and ST10 000 is the coefficient of startup businesses per 10 000 inhabitants. Just to give you an idea what it looks like empirically, I am presenting data for Krakow (KR, my hometown) and Warsaw (WA, Polish capital), in 2008 and 2017, which I designate, respectively, as STF(city_acronym; 2008) and STF(city_acronym; 2017). It goes like:

STF(KR; 2008) = 5% * (754 624/ 10 000) * 200 = 755

STF(KR; 2017) = 5% * (767 348/ 10 000) * 257 = 986

STF(WA; 2008) = 5% * (1709781/ 10 000) * 200 = 1 710

STF(WA; 2017) = 5% * (1764615/ 10 000) * 345 = 3 044   
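
A two-line helper makes that arithmetic easy to reproduce; the function name is mine, while the formula and the Krakow/Warsaw figures are exactly the ones quoted above.

```python
def financed_startups(population: int, startups_per_10k: float, selection_rate: float = 0.05) -> int:
    """STF = selection_rate * (N / 10 000) * ST10000, rounded to whole startups."""
    return round(selection_rate * (population / 10_000) * startups_per_10k)

print(financed_startups(754_624, 200))    # Krakow 2008  -> 755
print(financed_startups(767_348, 257))    # Krakow 2017  -> 986
print(financed_startups(1_709_781, 200))  # Warsaw 2008  -> 1710
print(financed_startups(1_764_615, 345))  # Warsaw 2017  -> 3044
```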

That glimpse of empirics allows guessing why we applied a neural network to that whole thing: the two core variables, namely population and the coefficient of startups per 10 000 people, can change with a lot of autonomy vis a vis each other. In the whole sample that we used for basic stochastic analysis, thus 7 cities observed from 2008 through 2017, which equals 70 observations, those two variables are Pearson-correlated at r = 0,6267. There is some significant correlation, and yet, with r² ≈ 0,39, some 61% of the observable variance in each of those variables doesn’t give a f**k about the variance of the other variable. The covariance of these two seems to be dominated by the variability in population rather than by uncertainty as for the average number of startups per 10 000 people.

What we have is quite a predictable trend of growing propensity to entrepreneurship, combined with a bit of randomness in demographics. Those two can come in various duos, and their duos tend to be actually trios, ‘cause we have that other thing, which I already mentioned: investment outlays of local governments and the share of those outlays in the overall local budgets. Our (my friend’s and mine) intuitive take on that picture was that it is really interesting to know the different ways those Polish cities can go in the future, rather than setting one central model. I mean, the central stochastic model is interesting too. It says, for example, that the natural logarithm of the number of startups per 10 000 inhabitants, whilst being negatively correlated with the share of investment outlays in the local government’s budget, is positively correlated with the absolute amount of those outlays. The more a local government spends on fixed assets, the more startups it can expect per 10 000 inhabitants. That latter variable is subject to some kind of scale effects from the part of the former. Interesting. I like scale effects. They are intriguing. They show phenomena which change in a way akin to what happens when I heat up a pot full of water: the more heat I have supplied to the water, the more different kinds of stuff can happen. We call it an increase in the number of degrees of freedom.

The stochastically approached degrees of freedom in the coefficient of startups per 10 000 inhabitants, you can see them in Table 1, below. The ‘Ln’ prefix means, of course, natural logarithms. Further below, I return to the topic of collective intelligence in this specific context, and to using artificial intelligence to simulate the thing.

Table 1

Explained variable: Ln(number of startups per 10 000 inhabitants); R2 = 0,608; N = 70

Explanatory variable | Coefficient of regression | Standard error | Significance level
--- | --- | --- | ---
Ln(investment outlays of the local government) | -0,093 | 0,048 | p = 0,054
Ln(total budget of the local government) | 0,565 | 0,083 | p < 0,001
Ln(population) | -0,328 | 0,09 | p < 0,001
Constant | -0,741 | 0,631 | p = 0,245

I take the correlations from Table 1, thus the coefficients of regression from the first numerical column, and I check their credentials with the significance level from the last numerical column. As I want to understand them as real, actual things that happen in the cities studied, I recreate the real values. We are talking about coefficients of startups per 10 000 people, comprised somewhere between the observable minimum ST10 000 = 140 and the maximum equal to ST10 000 = 345, with a mean at ST10 000 = 223. In terms of natural logarithms, that world folds into something between ln(140) = 4,941642423 and ln(345) = 5,843544417, with the expected mean at ln(223) = 5,407171771. Standard deviation Ω from that mean can be reconstructed from the standard error, which is calculated as s = Ω/√N, and, consequently, Ω = s*√N. In this case, with N = 70, standard deviation Ω = 0,631*√70 = 5,279324767.
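
For readers who want to play with those coefficients, here is a small sketch that plugs them back into the fitted regression equation. The function name and the input figures are mine and purely illustrative; the coefficients are exactly those reported in Table 1.

```python
import math

# Coefficients from the regression table above
# (explained variable: Ln(number of startups per 10 000 inhabitants)).
B_INVESTMENT = -0.093   # Ln(investment outlays of the local government)
B_BUDGET = 0.565        # Ln(total budget of the local government)
B_POPULATION = -0.328   # Ln(population)
CONSTANT = -0.741

def predicted_startups_per_10k(investment_outlays: float, total_budget: float, population: float) -> float:
    """Fitted value of the coefficient of startups per 10 000 inhabitants."""
    ln_st = (CONSTANT
             + B_INVESTMENT * math.log(investment_outlays)
             + B_BUDGET * math.log(total_budget)
             + B_POPULATION * math.log(population))
    return math.exp(ln_st)

# Hypothetical illustration only: budget figures in PLN, population as headcount.
print(predicted_startups_per_10k(investment_outlays=8e8, total_budget=5e9, population=760_000))
# ~ 250 startups per 10 000 inhabitants, i.e. within the observed range of 140-345.
```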

That regression is interesting to the extent that it leads to an absurd prediction. If the population of a city shrinks asymptotically down to zero, and if, in the same time, the budget of the local government swells up to infinity, the occurrence of entrepreneurial behaviour (number of startups per 10 000 inhabitants) will tend towards infinity as well. There is that nagging question, how the hell can the budget of a local government expand when its tax base – the population – is collapsing. I am an economist and I am supposed to answer questions like that.

Before being an economist, I am a scientist. I ask embarrassing questions and then I have to invent a way to give an answer. Those stochastic results I have just presented make me think of a somewhat haphazard set of correlations. Such correlations can be called dynamic, and this, in turn, makes me think about the swarm theory and collective intelligence (see Yang et al. 2013[1] or What are the practical outcomes of those hypotheses being true or false?). A social structure, for example that of a city, can be seen as a community of agents reactive to some systemic factors, similarly to ants or bees being reactive to pheromones they produce and dump into their social space. Ants and bees are amazingly intelligent collectively, whilst, let’s face it, they are bloody stupid singlehandedly. Ever seen a bee trying to figure things out in the presence of a window? Well, not only can a swarm of bees get that s**t down easily, but also, they can invent a way of nesting in and exploiting the whereabouts of the window. The thing is that a bee has its nervous system programmed to behave smartly mostly in social interactions with other bees.

I have already developed on the topic of money and capital being a systemic factor akin to a pheromone (see Technological change as monetary a phenomenon). Now, I am walking down this avenue again. What if city dwellers react, through entrepreneurial behaviour – or the lack thereof – to a certain concentration of budgetary spending from the local government? What if the budgetary money has two chemical hooks on it – one hook observable as ‘current spending’ and the other signalling ‘investment’ – and what if the reaction of inhabitants depends on the kind of hook switched on, in the given million of euros (or rather Polish zlotys, or PLN, as we are talking about Polish cities)?

I am returning, for a moment, to the negative correlation between the headcount of population, on the one hand, and the occurrence of new businesses per 10 000 inhabitants. Cities – at least those 7 Polish cities that me and my friend did our research on – are finite spaces. Less people in the city means less people per 1 km2 and vice versa. Hence, the occurrence of entrepreneurial behaviour is negatively correlated with the density of population. A behavioural pattern emerges. The residents of big cities in Poland develop entrepreneurial behaviour in response to greater a concentration of current budgetary spending by local governments, and to lower a density of population. On the other hand, greater a density of population or less money spent as current payments from the local budget act as inhibitors of entrepreneurship. Mind you, greater a density of population means greater a need for infrastructure – yes, those humans tend to crap and charge their smartphones all over the place – whence greater a pressure on the local governments to spend money in the form of investment in fixed assets, whence the secondary in its force, negative correlation between entrepreneurial behaviour and investment outlays from local budgets.

This is a general, behavioural hypothesis. Now, the cognitive challenge consists in translating the general idea into empirical hypotheses as precise as possible. What precise states of nature can happen in those cities? This is when artificial intelligence – a neural network – can serve, and this is when I finally understand where that idea of an investment fund had come from. A neural network is good at producing plausible combinations of values in a pre-defined set of variables, and this is what we need if we want to formulate precise hypotheses. Still, a neural network is made for learning. If I want the thing to make those hypotheses for me, I need to give it a purpose, i.e. a variable to optimize, and let it learn as it is optimizing.

In social sciences, entrepreneurial behaviour is assumed to be a good thing. When people recurrently start new businesses, they are in a generally go-getting frame of mind, and this carries over into social activism, into the formation of institutions etc. In an initial outburst of neophyte enthusiasm, I might program my neural network so as to optimize the coefficient of startups per 10 000 inhabitants. There is a catch, though. When I tell a neural network to optimize a variable, it takes the most likely value of that variable, thus, stochastically, its arithmetical average, and it keeps recombining all the other variables so as to have this one nailed down, as close to that most likely value as possible. Therefore, if I want a neural network to imagine relatively high occurrences of entrepreneurial behaviour, I shouldn’t set said behaviour as the outcome variable. I should mix it with others, as an input variable. It is very human, by the way. You brace for achieving a goal, you struggle the s**t out of yourself, and you discover, with negative amazement, that instead of moving forward, you are actually repeating the same existential pattern over and over again. You can set your personal compass, though, on just doing a good job and having fun with it, and then something strange happens. Things get done, and you haven’t even noticed when and how. Goals get nailed down even without being phrased explicitly as goals. And you are having fun with the whole thing, i.e. with life.

Same for artificial intelligence, as it is, as a matter of fact, an artful expression of our own, human intelligence: it produces the most interesting combinations of variables as a by-product of optimizing something boring. Thus, I want my neural network to optimize on something not-necessarily-fascinating and see what it can do in terms of people and their behaviour. Here comes the idea of an investment fund. As I have been racking my brains in the search of place where that idea had come from, I finally understood: an investment fund is both an institutional scheme, and a metaphor. As a metaphor, it allows decomposing an aggregate stream of investment into a set of more or less autonomous projects, and decisions attached thereto. An investment fund is a set of decisions coordinated in a dynamically correlated manner: yes, there are ways and patterns to those decisions, but there is a lot of autonomous figuring-out-the-thing in each individual case.

Thus, if I want to put functionally together those two social phenomena – investment channelled by local governments and entrepreneurial behaviour in local population – an investment fund is a good institutional vessel to that purpose. Local government invests in some assets, and local homo sapiens do the same in the form of startups. What if we mix them together? What if the institutional scheme known as public-private partnership becomes something practiced serially, as a local market for ideas and projects?

When we were designing that financial scheme for local governments, me and my friend had the idea of dropping a bit of crowdfunding into the cooking pot, and, as strange as it could seem, we are a bit confused as for where this idea came from. Why did we think about crowdfunding? If I want to understand how a piece of artificial intelligence simulates collective intelligence in a social structure, I need to understand what kind of logical connections I had projected into the neural network. Crowdfunding is sort of spontaneous. When I have a look at the typical conditions proposed by businesses crowdfunded at Kickstarter or at StartEngine, these are shitty contracts, with all the due respect. Having a Master’s in law, when I look at the contracts offered to investors in those schemes, I wouldn’t sign such a contract if I had any room for negotiation. I wouldn’t even sign a contract the way I am supposed to sign it via a crowdfunding platform.

There is quite a strong body of legal and business research claiming that crowdfunding contracts are a serious disruption to the established contractual patterns (Savelyev 2017[2]). Crowdfunding largely rests on the so-called smart contracts, i.e. agreements written and signed as software on Blockchain-based platforms. Those contracts are unusually flexible, as each amendment, whether general or specific, can be hash-coded into the history of the individual contractual relation. That turns a large part of legal science on its head. The basic intuition of any trained lawyer is that we negotiate the s**t out of ourselves before the signature of the contract, thus at the stage of formulating its general principles, and anything that happens later is just secondary. With smart contracts, we are pretty relaxed when it comes to setting the basic skeleton of the contract. We just put the big bones in, and expect we’re gonna make up the more sophisticated stuff as we go along.

With the abundant usage of smart contracts, crowdfunding platforms have a peculiar legal flexibility. Today you sign up for having a discount of 10% on one Flower Turbine, in exchange for £400 in capital crowdfunded via a smart contract. Next week, you learn that you can turn your 10% discount on one turbine into 7% on two turbines if you drop just £100 more into that piggy bank. Already the first step (£400 against the discount of 10%) would be a bit hard to squeeze into classical contractual arrangements for investing in the equity of a business, let alone the subsequent amendment (Armour, Enriques 2018[3]).

Yet, with a smart contract on a crowdfunding platform, anything is just a few clicks away, and, as astonishing as it could seem, the whole thing works. The click-based smart contracts are actually enforced and respected. People do sign those contracts, and moreover, when I mentally step out of my academic lawyer’s shoes, I admit being tempted to sign such a contract too. There is a specific behavioural pattern attached to crowdfunding, something like the Russian ‘Davaj, riebiata!’ (‘Давай, ребята!’ in the original spelling). ‘Let’s do it together! Now!’, that sort of thing. It is almost as if I were giving someone the power of attorney to be entrepreneurial on my behalf. If people in big Polish cities found more and more startups per 10 000 residents, it is a more and more recurrent manifestation of entrepreneurial behaviour, and crowdfunding touches the very heart of entrepreneurial behaviour (Agrawal et al. 2014[4]). It is entrepreneurship broken into small, tradable units. The whole concept we invented is generally placed in the European context, and in Europe crowdfunding is way below the popularity it has reached in North America (Rupeika-Apoga, Danovi 2015[5]). As a matter of fact, European entrepreneurs seem to consider crowdfunding as a really secondary source of financing.

Time to sum up a bit all those loose thoughts. Using a neural network to simulate collective behaviour of human societies involves a few deep principles, and a few tricks. When I study a social structure with classical stochastic tools and I encounter strange, apparently paradoxical correlations between phenomena, artificial intelligence may serve. My intuitive guess is that a neural network can help in clarifying what is sometimes called ‘background correlations’ or ‘transitive correlations’: variable A is correlated with variable C through the intermediary of variable B, i.e. A is significantly correlated with B, and B is significantly correlated with C, but the correlation between A and C remains insignificant.
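A quick numerical toy shows what I mean by a transitive correlation (my own sketch, with made-up data, not the empirical material I work with): variable B is built from A, C is built from B, and yet the direct correlation between A and C can easily fall below the threshold of statistical significance.

```python
# Toy demonstration of a 'transitive correlation': A drives B, B drives C,
# yet the direct A-C correlation is weak enough to look insignificant.
# Entirely simulated data, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 40
A = rng.normal(size=n)
B = 0.5 * A + rng.normal(size=n)          # B carries a trace of A
C = 0.5 * B + rng.normal(size=n)          # C carries a trace of B, thus an echo of A

for name, x, y in [("A-B", A, B), ("B-C", B, C), ("A-C", A, C)]:
    r, p = stats.pearsonr(x, y)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
# with most random draws, A-B and B-C come out clearly significant,
# while A-C hovers around (or above) the 0.05 threshold
```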

When I started to use a neural network in my research, I realized how important it is to formulate very precise and complex hypotheses rather than definitive answers. Artificial intelligence allows sketching alternative states of nature quickly, by the gazillion. For a moment, I am leaving the topic of those financial solutions for cities, and I return to my research on energy, more specifically on energy efficiency. In a draft article I wrote last autumn, I started to study the relative impact of the velocity of money, as well as that of the speed of technological change, upon the energy efficiency of national economies. Initially, I approached the thing in a nice, classically stochastic way. I came up with conclusions of the type: ‘variance in the supply of money accounts for 7% of the observable variance in energy efficiency, and the correlation is robust’. Good, this is a step forward. Still, in practical terms, what does it give? Does it mean that we need to add money to the system in order to have greater energy efficiency? Might well be the case, only you don’t add money to the system just like that, ‘cause most of said money is account money on current bank accounts, and the current balances of those accounts reflect the settlement of obligations resulting from complex private contracts. There is no government that could possibly add more complex contracts to the system.

Thus, stochastic results, whilst looking and sounding serious and scientific, bear only a remote connection to practical applications. On the other hand, if I take the same empirical data and feed it into a neural network, I get alternative states of nature, and those states are bloody interesting. Artificial intelligence can show me, for example, what happens to energy efficiency if a social system is more or less conservative in its experimenting with itself. In short, artificial intelligence allows super-fast simulation of social experiments, and that simulation is theoretically robust.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon suggests I suggest to you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] Yang, X. S., Cui, Z., Xiao, R., Gandomi, A. H., & Karamanoglu, M. (2013). Swarm intelligence and bio-inspired computation: theory and applications.

[2] Savelyev, A. (2017). Contract law 2.0: ‘Smart’ contracts as the beginning of the end of classic contract law. Information & Communications Technology Law, 26(2), 116-134.

[3] Armour, J., & Enriques, L. (2018). The promise and perils of crowdfunding: Between corporate finance and consumer contracts. The Modern Law Review, 81(1), 51-84.

[4] Agrawal, A., Catalini, C., & Goldfarb, A. (2014). Some simple economics of crowdfunding. Innovation Policy and the Economy, 14(1), 63-97.

[5] Rupeika-Apoga, R., & Danovi, A. (2015). Availability of alternative financial resources for SMEs as a critical part of the entrepreneurial eco-system: Latvia and Italy. Procedia Economics and Finance, 33, 200-210.

Lean, climbing trends

My editorial on You Tube

Our artificial intelligence: the working title of my research, for now. Volume 1: Energy and technological change. I am doing a little bit of rummaging in available data, just to make sure I keep contact with reality. Here comes a metric: access to electricity in the world, measured as the % of total human population[1]. The trend line looks proudly ascending. In 2016, 87,38% of mankind had at least one electric socket in their place. Ten years earlier, by the end of 2006, they were 81,2%. Optimistic. Looks like something growing almost linearly. Another one: « Electric power transmission and distribution losses »[2]. This one looks different: instead of a clear trend, I observe something shaking and oscillating, with the width of variance narrowing gently down, as time passes. By the end of 2014 (last data point in this dataset), we were globally at 8,25% of electricity lost in transmission. The lowest coefficient of loss occurred in 1998: 7,13%.

I move from distribution to production of electricity, and to its percentage supplied from nuclear power plants[3]. Still another shape, that of a steep bell with surprisingly lean edges. Initially, it was around 2% of global electricity supplied by the nuclear. At the peak of fascination, it was 17,6%, and at the end of 2014, we went down to 10,6%. The thing seems to be temporarily stable at this level. As I move to water, and to the percentage of electricity derived from the hydro[4], I see another type of change: a deeply serrated, generally descending trend. In 1971, we had 20,2% of our total global electricity from the hydro, and by the end of 2014, we were at 16,24%. In the meantime, it looked like a rollercoaster. Yet, as I am having a look at other renewables (i.e. other than hydroelectricity) and their share in the total supply of electricity[5], the shape of the corresponding curve looks like a snake, trying to figure something out about a vertical wall. Between 1971 and 1988, the share of those other renewables in the total electricity supplied moved from 0,25% to 0,6%. Starting from 1989, it is an almost perfectly exponential growth, to reach 6,77% in 2015. 

Just to have a complete picture, I shift slightly, from electricity to energy consumption as a whole, and I check the global share of renewables therein[6]. Surprise! This curve does not behave at all as it is expected to behave, after having seen the previously cited share of renewables in electricity. Instead of a snake sniffing a wall, we can see a snake like from above, or something like a meandering river. This seems to be a cycle over some 25 years (could it be Kondratiev’s?), with a peak around 18% of renewables in the total consumption of energy, and a trough somewhere by 16,9%. Right now, we seem to be close to the peak.

I am having a look at the big, ugly brother of hydro: the oil, gas and coal sources of electricity and their share in the total amount of electricity produced[7]. Here, I observe a different shape of change. Between 1971 and 1986, the fossils dropped their share from 62% to 51,47%. Then, it rockets up back to 62% in 1990. Later, a slowly ascending trend starts, just to reach a peak, and oscillate for a while around some 65 ÷ 67% between 2007 and 2011. Since then, the fossils are dropping again: the short-term trend is descending.  

Finally, one of the basic metrics I have been using frequently in my research on energy: the final consumption thereof, per capita, measured in kilograms of oil equivalent[8]. Here, we are back in the world of relatively clear trends. This one is ascending, with some bumps on the way, though. In 1971, we were at 1336,2 koe per person per year. In 2014, it was 1920,655 koe.
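By the way, if you feel like re-running this little data walk yourself, the series quoted above all sit in the World Bank databank, under the indicator codes listed in the footnotes. Here is a minimal sketch of pulling them in Python; I am assuming the public World Bank API v2 JSON endpoint and the ‘WLD’ world aggregate, so treat it as a starting point rather than a polished tool.

```python
# A sketch for re-pulling the series quoted above, assuming the public
# World Bank API v2 JSON endpoint; indicator codes come from the footnotes.
import requests

INDICATORS = {
    "EG.ELC.ACCS.ZS": "access to electricity, % of population",
    "EG.ELC.LOSS.ZS": "transmission and distribution losses, %",
    "EG.ELC.NUCL.ZS": "electricity from nuclear, %",
    "EG.ELC.HYRO.ZS": "electricity from hydro, %",
    "EG.ELC.RNWX.ZS": "electricity from other renewables, %",
    "EG.FEC.RNEW.ZS": "renewables in final energy consumption, %",
    "EG.ELC.FOSL.ZS": "electricity from oil, gas and coal, %",
    "EG.USE.PCAP.KG.OE": "energy use per capita, koe",
}

def world_series(indicator: str, country: str = "WLD") -> dict:
    """Return {year: value} for one indicator, world aggregate by default."""
    url = f"https://api.worldbank.org/v2/country/{country}/indicator/{indicator}"
    resp = requests.get(url, params={"format": "json", "per_page": 200}, timeout=30)
    resp.raise_for_status()
    meta, records = resp.json()          # element 0 is paging info, element 1 the data points
    return {int(r["date"]): r["value"] for r in records if r["value"] is not None}

if __name__ == "__main__":
    for code, label in INDICATORS.items():
        series = world_series(code)
        latest_year = max(series)
        print(f"{label}: {series[latest_year]:.2f} in {latest_year}")
```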

Thus, what are all those curves telling me? I can see three clearly different patterns. The first is the ascending trend, observable in the access to electricity, in the consumption of energy per capita, and, since the late 1980s, in the share of electricity derived from renewable sources. The second is a cyclical variation: the share of renewables in the overall consumption of energy, to some extent the relative importance of hydroelectricity, as well as that of the nuclear. Finally, I can observe a descending trend in the relative importance of the nuclear since 1988, as well as in some episodes from the life of hydroelectricity, coal and oil.

On top of that, I can distinguish different patterns in, respectively, the production of energy, on the one hand, and its consumption, on the other hand. The latter seems to change along relatively predictable, long-term paths. The former looks like a set of parallel, and partly independent, experiments with different sources of energy. We are collectively intelligent: I deeply believe that. I mean, I hope. If bees and ants can be smarter collectively than singlehandedly, there is some potential in us as well.

Thus, I am progressively designing a collective intelligence which experiments with various sources of energy, just to produce those two, relatively lean, climbing trends: more energy per capita and an ever-growing percentage of capitae with access to electricity. Which combinations of variables can produce a rationally desired energy efficiency? How is the supply of money changing as we reach different levels of energy efficiency? Can artificial intelligence make energy policies? Empirical check: take a real energy policy and build a neural network which reflects the logical structure of that policy. Then add a method of learning and see what it produces as a hypothetical outcome.

What is the cognitive value of hypotheses made with a neural network? The answer to this question starts with another question: how do hypotheses made with a neural network differ from any other set of hypotheses? The hypothetical states of nature produced by a neural network reflect the outcomes of logically structured learning. The process of learning should represent real social change and real collective intelligence. There are four most important distinctions I have observed so far, in this respect: a) awareness of internal cohesion b) internal competition c) relative resistance to new information and d) perceptual selection (different ways of standardizing input data).

The awareness of internal cohesion, in a neural network, is a function that feeds information on the relative cohesion (Euclidean distance) between variables into the consecutive experimental rounds of learning. We assume that each variable used in the neural network reflects a sequence of collective decisions in the corresponding social structure. Cohesion between variables represents the functional connection between sequences of collective decisions. Awareness of internal cohesion, as a logical attribute of a neural network, corresponds to situations when societies are aware of how mutually coherent their different collective decisions are. The lack of logical feedback on internal cohesion represents situations when societies do not have that internal awareness.
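Here is a minimal sketch of one way such a feedback could be wired (my own toy assumptions, not a reproduction of the exact network I use in my research): after each round of learning, the mean Euclidean distance between the standardized variables, including the network’s freshly produced output, is computed and fed back as one extra input in the next round.

```python
# One possible wiring of 'awareness of internal cohesion' into a perceptron.
# A sketch under my own assumptions, not the exact research network.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
data = rng.uniform(size=(100, 4))                 # 4 standardized "collective decision" variables
target = data.mean(axis=1, keepdims=True)         # toy outcome to optimize

def mean_euclidean_distance(x: np.ndarray) -> float:
    """Mean pairwise Euclidean distance between columns (variables)."""
    cols = x.T
    pairs = combinations(range(cols.shape[0]), 2)
    return float(np.mean([np.linalg.norm(cols[i] - cols[j]) for i, j in pairs]))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_inputs = data.shape[1] + 1                      # +1 for the cohesion signal
w = rng.normal(0.0, 0.1, size=(n_inputs, 1))
cohesion = mean_euclidean_distance(data)          # initial awareness, before any output exists

for round_ in range(500):
    x = np.hstack([data, np.full((data.shape[0], 1), cohesion / 10.0)])  # cohesion as extra input
    out = sigmoid(x @ w)
    err = target - out
    w += 0.05 * x.T @ (err * out * (1.0 - out)) / len(x)
    # the network "observes" the cohesion of what it has just produced
    cohesion = mean_euclidean_distance(np.hstack([data, out]))

print("final mean error:", round(float(np.abs(err).mean()), 4))
```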

As I metaphorically look around, I ask myself what awareness I have about important collective decisions in my local society. I can observe and pattern people’s behaviour, for one. Next thing: I can read (very literally) the formalized, official information regarding legal issues. On top of that, I can study (read, mostly) quantitatively formalized information on measurable attributes of the society, such as GDP per capita, supply of money, or emissions of CO2. Finally, I can have that semi-formalized information from what we call “media”, whatever prefix they come with: mainstream media, social media, rebel media, the-only-true-media etc.

As I look back upon my own life and the changes which I have observed on those four levels of social awareness, the fourth one, namely the media, has been, and still is the biggest game changer. I remember the cultural earthquake in 1990 and later, when, after decades of state-controlled media in the communist Poland, we suddenly had free press and complete freedom of publishing. Man! It was like one of those moments when you step out of a calm, dark alleyway right into the middle of heavy traffic in the street. Information, it just wheezed past.         

There is something about media, both those called ‘mainstream’, and the modern platforms like Twitter or You Tube: they adapt to their audience, and the pace of that adaptation is accelerating. With Twitter, it is obvious: when I log into my account, I can see the Tweets only from people and organizations whom I specifically subscribed to observe. With You Tube, on my starting page, I can see the subscribed channels, for one, and a ton of videos suggested by artificial intelligence on the grounds of what I watched in the past. Still, the mainstream media go down the same avenue. When I go to bbc.com, the types of news presented are very largely what the editorial team hopes will max out on clicks per hour, which, in turn, is based on the types of news that totalled the most clicks in the past. The same was true for printed newspapers, 20 years ago: the stuff that got to the headlines was the kind of stuff that made sales.

Thus, when I simulate collective intelligence of a society with a neural network, the function allowing the network to observe its own, internal cohesion seems to be akin to the presence of media platforms. Actually, I have already observed, many times, that adding this specific function to a multi-layer perceptron (a type of neural network) makes that perceptron less cohesive. Looks like a paradox: observing the relative cohesion between its own decisions makes a piece of AI less cohesive. Still, real life confirms that observation. Social media favour the phenomenon known as « echo chamber »: if I want, I can expose myself only to the information that minimizes my cognitive dissonance and cut myself off from anything that pumps my adrenaline up. On a large scale, this behavioural pattern produces a galaxy of relatively small groups encapsulated in highly distilled, mutually incoherent worldviews. Have you ever wondered what it would be like to use GPS navigation to find your way, in the company of a hardcore flat-Earther?

When I run my perceptron over samples of data regarding the energy efficiency of national economies, turning on the feedback on the so-called fitness function is largely equivalent to simulating a society with abundant media activity. The absence of such feedback is, on the other hand, like a society without much of a media sector.

Internal competition, in a neural network, is the deep underlying principle for structuring a multi-layer perceptron into separate layers, and manipulating the number of neurons in each layer. Let’s suppose I have two neural layers in a perceptron: A, and B, in this exact order. If I put three neurons in the layer A, and one neuron in the layer B, the one in B will be able to choose between the 3 signals sent from the layer A. Seen from the A perspective, each neuron in A has to compete against the two others for the attention of the single neuron in B. Choice on one end of a synapse equals competition on the other end.
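A bare-bones version of that A-versus-B setup, in code (a toy of my own making, with made-up numbers): three neurons in layer A propose three candidate signals, and the single neuron in layer B splits its attention between them through a softmax, i.e. it chooses, which means the three A-neurons compete for its attention.

```python
# Toy illustration of choice-as-competition between two neural layers.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(size=(1, 6))                       # one observation, six input variables

w_A = rng.normal(0.0, 0.5, size=(6, 3))            # layer A: three competing neurons
w_choice = rng.normal(0.0, 0.5, size=(3,))         # layer B's preference over A's signals

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

signals_A = sigmoid(x @ w_A)                       # shape (1, 3): three candidate signals
attention = softmax(w_choice)                      # how layer B splits its attention
output_B = (signals_A @ attention).item()          # layer B's single, chosen-and-blended signal

print("signals from layer A:", signals_A.round(3))
print("attention of layer B:", attention.round(3))
print("output of layer B:   ", round(output_B, 3))
```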

When I want to introduce choice in a neural network, I need to introduce internal competition as well. If any neuron is to have a choice between processing input A and its rival, input B, there must be at least two distinct neurons – A and B – in a functionally distinct, preceding neural layer. In a collective intelligence, choice requires competition, and there seems to be no way around it. In a real brain, neurons form synaptic sequences, which means that the great majority of our neurons fire because other neurons have fired beforehand. We very largely think because we think, not because something really happens out there. Neurons in charge of the early-stage collection of sensory data compete for the attention of our brain stem, which, in turn, proposes its pre-selected information to the limbic system, and the emotional exultation of the latter incites the cortical areas to think about the whole thing. From there, further cortical activity happens just because other cortical activity has been happening so far.

Let me propose a quick self-check: think about what you are thinking right now, and ask yourself how much of what you are thinking about is really connected to what is happening around you. Are you thinking a lot about the gradient of temperature close to your skin? No, not really? Really? Are you giving a lot of conscious attention to the chemical composition of the surface you are touching right now with your fingertips? Not really a lot of conscious thinking about this one either? Now, how much conscious attention are you devoting to what [fill in the blank] said about [fill in the blank], yesterday? Quite a lot of attention, isn’t it?

The point is that some ideas die out, in us, quickly and sort of silently, whilst others are tough survivors and keep popping up to the surface of our awareness. Why? How does it happen? What if there is some kind of competition between synaptic paths? Thoughts, or components thereof, that win one stage of the competition pass to the next, where they compete again.           

Internal competition requires complexity. There needs to be something to compete for, a next step in the chain of thinking. A neural network with internal competition reflects a collective intelligence with internal hierarchies that offer rewards. Interestingly, there is research showing that greater complexity gives more optimizing accuracy to a neural network, but only as long as we are talking about really low complexity, like 3 layers of neurons instead of two. As complexity is developed further, accuracy decreases noticeably. More complexity is not necessarily a better solution for optimization: see Olawoyin and Chen (2018[9]).

Relative resistance to new information corresponds to the way that an intelligent structure deals with cognitive dissonance. In order to have any cognitive dissonance whatsoever, we need at least two pieces of information: one that we have already appropriated as our knowledge, and the new stuff, which could possibly disturb the placid self-satisfaction of the I-already-know-how-things-work. Cognitive dissonance is a potent factor of stress in human beings as individuals, and in whole societies. Galileo would have a few words to say about it. Question: how to represent in a mathematical form the stress connected to cognitive dissonance? My provisional answer is: by division. Cognitive dissonance means that I consider my acquired knowledge as more valuable than new information. If I want to decrease the importance of B in relation to A, I divide B by a factor greater than 1, whilst leaving A as it is. The denominator of new information is supposed to grow over time: I am more resistant to the really new stuff than I am to the already slightly processed information, which was new yesterday. In a more elaborate form, I can use the exponential progression (see The really textbook-textbook exponential growth).
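In code, that ‘resistance by division’ can look like the sketch below (my own reading of the idea, with made-up numbers): each incoming signal gets divided by a denominator that grows exponentially with its novelty, so the freshest information weighs the least.

```python
# A sketch of 'resistance to new information by division': the denominator
# of each signal grows exponentially with how new that signal is.
import numpy as np

def dampen(new_info: np.ndarray, resistance: float = 0.3) -> np.ndarray:
    """Divide incoming signals by e^(resistance * recency).

    The most recent observation (last element) gets the biggest denominator,
    i.e. the strongest cognitive-dissonance discount.
    """
    recency = np.arange(len(new_info))            # 0 = oldest, len-1 = freshest
    denominator = np.exp(resistance * recency)    # the textbook exponential progression
    return new_info / denominator

incoming = np.array([0.8, 0.7, 0.9, 0.95, 1.0])   # five signals, oldest to newest
print(dampen(incoming).round(3))
# the freshest signal, although the strongest at the source, ends up weighing the least
```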

I noticed an interesting property of the neural network I use for studying energy efficiency. When I introduce choice, internal competition and hierarchy between neurons, the perceptron gets sort of wild: it produces increasing error instead of decreasing error, so it basically learns how to swing more between possible states, rather than how to narrow its own trial and error down to one recurrent state. When I add a pinch of resistance to new information, i.e. when I purposefully create stress in the presence of cognitive dissonance, the perceptron calms down a bit, and can produce a decreasing error.

Selection of information can occur already at the level of primary perception. I developed on this one in « Thinking Poisson, or ‘WTF are the other folks doing?’ ». Let’s suppose that new science comes in as for how to use particular sources of energy. We can imagine two scenarios of reaction to that new science. On the one hand, the society can react in a perfectly flexible way, i.e. each new piece of scientific research gets evaluated for its real utility in energy management, and gets smoothly included into the existing body of technologies. On the other hand, the same society (well, not quite the same, an alternative one) can sharply sort those new pieces of science into ‘useful stuff’ and ‘crap’, with little nuance in between.

What do we know about collective learning and collective intelligence? Three essential traits come to my mind. Firstly, we make social structures, i.e. recurrent combinations of social relations, and those structures tend to be quite stable. We like having stable social structures. We almost instinctively create rituals, rules of conduct, enforceable contracts etc., thus we make stuff that is supposed to make the existing stuff last. An unstable social structure is prone to wars, coups etc. Our collective intelligence values stability. Still, stability is not the same as perfect conservatism: our societies have imperfect recall. This is the second important trait. Over (long periods of) time we collectively shake off and replace old rules of social games with new rules, and we do it without disturbing the fundamental social structure. In other words: stable as they are, our social structures have mechanisms of adaptation to new conditions, and yet those mechanisms require forgetting something about our past. OK, not just forgetting something: we collectively forget a shitload of something. Thirdly, there have been many local human civilisations, and each of them eventually collapsed, i.e. their fundamental social structures disintegrated. The civilisations we have made so far had a limited capacity to learn. Sooner or later, they would bump against a challenge which they were unable to adapt to. The mechanism of collective forgetting and shaking off, in every known, historically documented case, had a limited efficiency.

I intuitively guess that simulating collective intelligence with artificial intelligence is likely to be the most fruitful when we simulate various capacities to learn. I think we can model something like a perfectly adaptable collective intelligence, i.e. one which has no cognitive dissonance and processes information uniformly over time, whilst having a broad range of choice and internal competition. Such a neural network behaves in the opposite way to what we tend to associate with AI: instead of optimizing and narrowing down the margin of error, it creates new alternative states, possibly in a broadening range. This is a collective intelligence with lots of capacity to learn, but little capacity to steady itself as a social structure. From there, I can muzzle the collective intelligence with various types of stabilizing devices, making it progressively more and more structure-making, and less flexible. Down that avenue lies the solver type of artificial intelligence, thus a neural network that just solves a problem, with one, temporarily optimal solution.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. You can communicate with me directly, via the mailbox of this blog: goodscience@discoversocialsciences.com. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon suggests I suggest to you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?


[1] https://data.worldbank.org/indicator/EG.ELC.ACCS.ZS last access May 17th, 2019

[2] https://data.worldbank.org/indicator/EG.ELC.LOSS.ZS?end=2016&start=1990&type=points&view=chart last access May 17th, 2019

[3] https://data.worldbank.org/indicator/EG.ELC.NUCL.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[4] https://data.worldbank.org/indicator/EG.ELC.HYRO.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[5] https://data.worldbank.org/indicator/EG.ELC.RNWX.ZS?type=points last access May 17th, 2019

[6] https://data.worldbank.org/indicator/EG.FEC.RNEW.ZS?type=points last access May 17th, 2019

[7] https://data.worldbank.org/indicator/EG.ELC.FOSL.ZS?end=2014&start=1960&type=points&view=chart last access May 17th, 2019

[8] https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE?type=points last access May 17th, 2019

[9] Olawoyin, A., & Chen, Y. (2018). Predicting the Future with Artificial Neural Network. Procedia Computer Science, 140, 383-392.

Thinking Poisson, or ‘WTF are the other folks doing?’

My editorial on You Tube

I think I have just put a nice label on all those ideas I have been rummaging in for the last 2 years. The last 4 months, when I have been progressively initiating myself into artificial intelligence, have helped me to put it all in a nice frame. Here is the idea for a book, or rather for THE book, which I have been drafting for some time. « Our artificial intelligence »: this is the general title. The first big chapter, which might very well turn into the first book out of a whole series, will be devoted to energy and technological change. After that, I want to have a go at two other big topics: food and agriculture, then laws and institutions.

I explain. What does it mean « Our artificial intelligence »? As I have been working with an initially simple algorithm of a neural network, and I have been progressively developing it, I understood a few things about the link between what we call, fault of a better word, artificial intelligence, and the way my own brain works. No, not my brain. That would be an overstatement to say that I understand fully my own brain. My mind, this is the right expression. What I call « mind » is an idealized, i.e. linguistic description of what happens in my nervous system. As I have been working with a neural network, I have discovered that artificial intelligence that I make, and use, is a mathematical expression of my mind. I project my way of thinking into a set of mathematical expressions, made into an algorithmic sequence. When I run the sequence, I have the impression of dealing with something clever, yet slightly alien: an artificial intelligence. Still, when I stop staring at the thing, and start thinking about it scientifically (you know: initial observation, assumptions, hypotheses, empirical check, new assumptions and new hypotheses etc.), I become aware that the alien thing in front of me is just a projection of my own way of thinking.

This is important about artificial intelligence: this is our own, human intelligence, just seen from outside and projected into electronics. This particular point is an important piece of theory I want to develop in my book. I want to compile research in neurophysiology, especially in the neurophysiology of meaning, language, and social interactions, in order to give scientific clothes to that idea. When we sometimes ask ourselves whether artificial intelligence can eliminate humans, it boils down to asking: ‘Can human intelligence eliminate humans?’. Well, where I come from, i.e. Central Europe, the answer is certainly ‘yes, it can’. As a matter of fact, when I raise my head and look around, the same answer is true for any part of the world. Human intelligence can eliminate humans, and it can do so because it is human, not because it is ‘artificial’.

When I think about the meaning of the word ‘artificial’, it comes from the Latin ‘artificium’, which, in turn, designates something made with skill and demonstrable craft. Artificium means seasoned skills made into something durable so as to express those skills. Artificial intelligence is a crafty piece of work made with one of the big human inventions: mathematics. Artificial intelligence is mathematics at work. Really at work, i.e. not just as another idealization of reality, but as an actual tool. When I study the working of algorithms in neural networks, I have a vision of an architect in Ancient Greece, where the first mathematics we know seem to be coming from. I have a wall and a roof, and I want them both to hold in balance, so what is the proportion between their respective lengths? I need to learn it by trial and error, as I haven’t any architectural knowledge yet. Although devoid of science, I have common sense, and I make small models of the building I want (have?) to erect, and I test various proportions. Some of those maquettes are more successful than others. I observe, I make my synthesis about the proportions which give the least error, and so I come up with something like the Pythagorean z² = x² + y², something like π ≈ 3,14 etc., or the discovery that, for a given angle, the tangent proportion y/x is always the same number, whatever the empirical lengths of y and x.

This is exactly what artificial intelligence does. It makes small models of itself, tests the error resulting from comparison between those models and something real, and generalizes the observation of those errors. Really: this is what a face recognition piece of software does at an airport, or what Google Ads does. This is human intelligence, just unloaded into a mathematical vessel. This is the first discovery that I have made about AI. Artificial intelligence is actually our own intelligence. Studying the way AI behaves allows seeing, like under a microscope, the workings of human intelligence.

The second discovery is that when I put a neural network to work with empirical data of social sciences, it produces strange, intriguing patterns, something like neighbourhoods of the actual reality. In my root field of research – namely economics – there is a basic concept that we, economists, use a lot and still wonder what it actually means: equilibrium. It is an old observation that networks of exchange in human societies tend to find balance in some precise proportions, for example proportions between demand, supply, price and quantity, or those between labour and capital.

Half of economic sciences is about explaining the equilibriums we can empirically observe. The other half employs itself at discarding what that first half comes up with. Economic equilibriums are something we know exists, and we constantly try to understand their mechanics, but those states of society remain obscure to a large extent. What we know is that networks of exchange are like machines: some designs just work, some others just don’t. One of the most important arguments in economic sciences is whether a given society can find many alternative equilibriums, i.e. whether it can use its resources optimally at many alternative proportions between economic variables, or, conversely, whether there is just one point of balance in a given place and time. From there on, it is a rabbit hole. What does it mean ‘using our resources optimally’? Is it when we have the lowest unemployment, or when we have just some healthy amount of unemployment? Theories are welcome.

When trying to make predictions about the future, using the apparatus of what can now be called classical statistics, social sciences always face the same dilemma: rigor vs cognitive depth. The most interesting correlations are usually somehow wobbly, and mathematical functions we derive from regression always leave a lot of residual errors.    

This is when AI can step in. Neural networks can be used as tools for optimization in digital systems. Still, they have another useful property: observing a neural network at work allows having an insight into how intelligent structures optimize. If I want to understand how economic equilibriums take shape, I can observe a piece of AI producing many alternative combinations of the relevant variables. Here comes my third fundamental discovery about neural networks: with a few, otherwise quite simple assumptions built into the algorithm, AI can produce very different mechanisms of learning, and, consequently, a broad range of those weird, yet intellectually appealing, alternative states of reality. Here is an example: when I make a neural network observe its own numerical properties, such as its own kernel or its own fitness function, its way of learning changes dramatically. Sounds familiar? When you make a human being perform tasks, and you allow them to see the MRI of their own brain while performing those tasks, the actual performance changes.

When I want to talk about applying artificial intelligence, it is a good thing to return to the sources of my own experience with AI, and explain how it works. Some sequences of mathematical equations, when run recurrently many times, behave like intelligent entities: they experiment, they make errors, and after many repeated attempts they come up with a logical structure that minimizes the error. I am looking for a good, simple example from real life; a situation which I experienced personally, and which forced me to learn something new. Recently, I went to Marrakech, Morocco, and I had the kind of experience that most European first-timers have there: the Jemaa El Fna market place, its surrounding souks, and its merchants. The experience consists in finding your way out of the maze-like structure of the alleys adjacent to the Jemaa El Fna. You walk down an alley, you turn into another one, then into still another one, and what you notice only after quite a few such turns is that the whole architectural structure doesn’t follow AT ALL the European concept of urban geometry.

Thus, you face the length of an alley. You notice five lateral openings and you see a range of lateral passages. In a European town, most of those lateral passages would lead somewhere. A dead end is an exception, and passages between buildings are passages in the strict sense of the term: from one open space to another open space. At Jemaa El Fna, it’s different: most of the lateral ways lead into deep, dead-end niches, with more shops and stalls inside, yet some others open up into other alleys, possibly leading to the main square, or at least to a main street.

You pin down a goal: get back to the main square in less than… what? One full day? Just kidding. Let’s peg that goal down at 15 minutes. Fault of having a good-quality drone, equipped with thermovision, flying over the whole structure of the souk, and guiding you, you need to experiment. You need to test various routes out of the maze and to trace those which allow a passage time of x ≤ 15 minutes. If all the possible routes allowed you to get out to the main square in exactly 15 minutes, experimenting would be useless. There is a point in experimenting only if some of the possible routes yield a suboptimal outcome. You are facing a paradox: in order not to make (too many) errors in your future strolls across Jemaa El Fna, you need to make some errors when you learn how to stroll through.

Now, imagine a fancy app in your smartphone, simulating the possible errors you can make when trying to find your way through the souk. You could watch an imaginary you, on the screen, wandering through the maze of alleys and dead-ends, learning by trial and error to drive the time of passage down to no more than 15 minutes. That would be interesting, wouldn’t it? You could see your possible errors from outside, and you could study the way you can possibly learn from them. Of course, you could always say: ‘it is not the real me, it is just a digital representation of what I could possibly do’. True. Still, I can guarantee you: whatever you say, however strong the grip you would try to keep on the actual, here-and-now you, you just couldn’t help being fascinated.
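Just for the fun of it, here is a toy version of that imaginary app (the alley graph below is entirely made up, not a map of the actual souk): it wanders at random, records which strolls reach the square within 15 minutes, and keeps the best one. Crude as it is, it does exactly what the paragraph above describes: it learns by making lots of errors.

```python
# A toy 'souk app': random strolls over a made-up alley graph, keeping only
# those that reach the square within the time limit. Illustration only.
import random

# each alley leads to some other alleys; many branches are dead ends
ALLEYS = {
    "start":   ["alley_1", "alley_2", "dead_end_a"],
    "alley_1": ["dead_end_b", "alley_3", "start"],
    "alley_2": ["alley_3", "dead_end_c", "start"],
    "alley_3": ["square", "alley_1", "alley_2"],
    "dead_end_a": ["start"], "dead_end_b": ["alley_1"], "dead_end_c": ["alley_2"],
    "square": [],
}
MINUTES_PER_ALLEY = 3
TIME_LIMIT = 15

def random_stroll(rng):
    """One experimental stroll: pick turns at random until the square or a timeout."""
    route, here = ["start"], "start"
    while here != "square" and len(route) * MINUTES_PER_ALLEY < 60:
        here = rng.choice(ALLEYS[here])
        route.append(here)
    return route

rng = random.Random(2019)
successes = []
for _ in range(1000):                              # 1000 strolls, most of them erroneous
    route = random_stroll(rng)
    if route[-1] == "square" and (len(route) - 1) * MINUTES_PER_ALLEY <= TIME_LIMIT:
        successes.append(route)

best = min(successes, key=len)
print(f"{len(successes)} strolls out of 1000 fit into {TIME_LIMIT} minutes")
print("shortest successful route:", " -> ".join(best))
```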

Is there anything more, beyond fascination, in observing ourselves making many possible future mistakes? Let’s think for a moment. I can see, somehow from outside, how a copy of me deals with the things of life. Question: how does the fact of seeing a copy of me trying to find a way through the souk differ from just watching a digital map of said souk, with GPS, such as Google Maps? I tried the latter, and I have two observations. Firstly, in some structures, such as that of maze-like alleys adjacent to Jemaa El Fna, seeing my own position on Google Maps is of very little help. I cannot put my finger on the exact reason, but my impression is that when the environment becomes just too bizarre for my cognitive capacities, having a bird’s eye view of it is virtually no good. Secondly, when I use Google Maps with GPS, I learn very little about my route. I just follow directions on the screen, and ultimately, I get out into the main square, but I know that I couldn’t reproduce that route without the device. Apparently, there is no way around learning stuff by myself: if I really want to learn how to move through the souk, I need to mess around with different possible routes. A device that allows me to see how exactly I can mess around looks like having some potential.

Question: how do I know that what I see, in that imaginary app, is a functional copy of me, and how can I assess the accuracy of that copy? This is, very largely, the rabbit hole I have been diving into for the last 5 months or so. The first path to follow is to look at the variables used. Artificial intelligence works with numerical data, i.e. with local instances of abstract variables. The similarity between the real me and the me reproduced as artificial intelligence is to be found in the variables used. In real life, variables are the kinds of things which: a) are correlated with my actions, both as outcomes and as determinants, and b) I care about, and yet I am not bound to be conscious of caring about.

Here comes another discovery I made on my journey through the realm of artificial intelligence: even if, in the simplest possible case, I just make the equations of my neural network so that they represent what I think is the way I think, and I drop some completely random values of the relevant variables into the first round of experimentation, the neural network produces something disquietingly logical and coherent. In other words, if I am even moderately honest in describing, in the form of equations, my way of apprehending reality, the AI I thus created really processes information in the way I would.

Another way of assessing the similarity between a piece of AI and myself is to compare the empirical data we use: I can make a neural network think more or less like me if I feed it with an accurate description of my so-far experience. In this respect, I discovered something that looks like a keystone in my intellectual structure: as I feed my neural network with more and more empirical data, the scope of possible ways of learning something meaningful narrows down. When I minimise the amount of empirical data fed into the network, the latter can produce interesting, meaningful results via many alternative sequences of equations. As the volume of real-life information swells, some sequences of equations just naturally drop off the game: they drive the neural network into a state of structural error, where it stops performing calculations.

At this point, I can see some similarity between AI and quantum physics. Quantum mechanics has grown as a methodology, as it proved to be exceptionally accurate in predicting the outcomes of experiments in physics. That accuracy was based on the capacity to formulate very precise hypotheses regarding empirical reality, and the capacity to increase the precision of those hypotheses through the addition of empirical data from past experiments.

Those fundamental observations I made about the workings of artificial intelligence have progressively brought me to use AI in social sciences. An analytical tool has become a topic of research for me. Happens all the time in science, mind you. Geometry, way back in the day, was a thoroughly practical set of tools, which served to make good boats, ships and buildings. With time, geometry has become a branch of science in its own right. In my case, it is artificial intelligence. It is a tool, essentially, invented back in the 1960s and 1970s, and developed over the last 20 years, and it serves practical purposes: facial identification, financial investment etc. Still, as I have been working with a very simple neural network for the last 4 months, and as I have been developing the logical structure of that network, I am discovering a completely new opening in my research in social sciences.

I am mildly obsessed with the topic of collective human intelligence. I have that deeply rooted intuition that collective human behaviour is always functional regarding some purpose. I perceive social structures such as financial markets or political institutions as something akin to endocrine systems in a body: complex set of signals with a random component in their distribution, and yet a very coherent outcome. I follow up on that intuition by assuming that we, humans, are most fundamentally, collectively intelligent regarding our food and energy base. We shape our social structures according to the quantity and quality of available food and non-edible energy. For quite a while, I was struggling with the methodological issue of precise hypothesis-making. What states of human society can be posited as coherent hypotheses, possible to check or, fault of checking, to speculate about in an informed way?

The neural network I am experimenting with does precisely this: it produces strange, puzzling, complex states, defined by the quantitative variables I use. As I am working with that network, I have come to redefining the concept of artificial intelligence. A movie-based approach to AI is that it is fundamentally non-human. As I think about it sort of step by step, AI is human, as it has been developed on the grounds of human logic. It is human meaning, and therefore an expression of human neural wiring. It is just selective in its scope. Natural human intelligence has no other way of comprehending but comprehending IT ALL, i.e. the whole of perceived existence. Artificial intelligence is limited in scope: it works just with the data we assign it to work with. AI can really afford not to give a f**k about something otherwise important. AI is focused in the strict sense of the term.

During that recent stay in Marrakech, Morocco, I had been observing people around me and their ways of doing things. As is my habit, I am patterning human behaviour. I am connecting the dots about the ways of using energy (for the moment I haven’t seen any making of energy yet) and food. I am patterning the urban structure around me and the way people live in it.

Superbly kept gardens and buildings marked by a sense of instability. Human generosity combined with somehow erratic behaviour in the same humans. Of course, women are fully dressed, from head to toes, but surprisingly enough, men too. With close to 30 degrees Celsius outside, most local dudes are dressed like a Polish guy would dress at 10 degrees Celsius. They dress for the heat as I would dress for noticeable cold. Exquisitely fresh and firm fruit and vegetables are a surprise. After having visited Croatia, on the Southern coast of Europe, I would rather expect those tomatoes to be soft and somehow past due. Still, they are excellent. Loads of sugar in very nearly everything. Meat is scarce and tough. All that has been already described and explained by many a researcher, wannabe researchers included. I think about those things around me as about local instances of a complex logical structure: a collective intelligence able to experiment with itself. I wonder what other, hypothetical forms this collective intelligence could take, close to the actually observable reality, as well as at some distance from it.

The idea I can see burgeoning in my mind is that I can understand better the actual reality around me if I use some analytical tool to represent slight hypothetical variations in said reality. Human behaviour first. What exactly makes me perceive Moroccans as erratic in their behaviour, and how can I represent it in the form of artificial intelligence? Subjectively perceived erraticism is a perceived dissonance between sequences. I expect a certain sequence to happen in other people’s behaviour. The sequence that really happens is different, and possibly more differentiated than what I expect to happen. When I perceive the behaviour of Moroccans as erratic, does it connect functionally with their ways of making and using food and energy?  

A behavioural sequence is marked by a certain order of actions, and a timing. In a given situation, humans can pick their behaviour from a total basket of Z = {a1, a2, …, az} possible actions. These, in turn, can combine into zPk = z!/(z – k)! = (1*2*…*z) / [1*2*…*(z – k)] possible permutations of k component actions. Each such permutation happens with a certain frequency. The way a human society works can be described as a set of frequencies in the happening of those zPk permutations. Well, that’s exactly what a neural network such as mine can do. It operates with values standardized between 0 and 1, and these can be very easily interpreted as frequencies of happening. I have a variable named ‘energy consumption per capita’. When I use it in the neural network, I routinely standardize each empirical value over the maximum of this variable in the entire empirical dataset. Still, standardization can carry a bit more of a mathematical twist: a standardized value can be seen as the density of probability under the curve of a statistical distribution.
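Both bits of arithmetic from the paragraph above fit into a few lines of Python (the sample below is just the three per-capita figures quoted further down, so the maximum used here is not the whole-dataset maximum I normally standardize over):

```python
# Permutations of k actions out of z, and the simple max-standardization.
import math
import numpy as np

z, k = 6, 3
print("zPk =", math.perm(z, k))                                # z!/(z-k)! = 6!/3! = 120

energy_per_capita = np.array([629.221, 3292.609, 2400.766])    # koe, three observations
print((energy_per_capita / energy_per_capita.max()).round(3))  # values between 0 and 1
```

So much for the plain-vanilla standardization over the maximum; the statistical-distribution twist comes next.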

When I feel like giving such a twist, I can make my neural network stroll down different avenues of intelligence. I can assume that all kinds of things happen, and all those things are sort of densely packed one next to the other, and some of those things are sort of more expected than others, and thus I can standardize my variables under the curve of the normal distribution. Alternatively, I can see each empirical instance of each variable in my database as a rare event in an interval of time, and then I standardize under the curve of the Poisson distribution. A quick check with the database I am using right now brings an important observation: the same empirical data standardized with a Poisson distribution becomes much more disparate as compared to the same data standardized with the normal distribution. When I use Poisson, I lead my neural network to divide empirical data sharply into important stuff on the one hand, and all the rest, not even worth bothering about, on the other hand.

I am giving an example. Here comes energy consumption per capita in Ecuador (1992) = 629,221 kg of oil equivalent (koe), Slovak Republic (2000) = 3 292,609 koe, and Portugal (2003) = 2 400,766 koe. These are three different states of human society, characterized by a certain level of energy consumption per person per year. They are different. I can choose between three different ways of making sense out of their disparity. I can see them quite simply as ordinals on a scale of magnitude, i.e. I can standardize them as fractions of the greatest energy consumption in the whole sample. When I do so, they become: Ecuador (1992) =  0,066733839, Slovak Republic (2000) =  0,349207223, and Portugal (2003) =  0,254620211.

In an alternative worldview, I can perceive those three different situations as neighbourhoods of an expected average energy consumption, in the presence of an average, standard deviation from that expected value. In other words, I assume that it is normal that countries differ in their energy consumption per capita, as well as it is normal that years of observation differ in that respect. I am thinking normal distribution, and then my three situations come as: Ecuador (1992) = 0,118803134, Slovak Republic (2000) = 0,556341893, and Portugal (2003) = 0,381628627.

I can adopt an even more convoluted approach. I can assume that energy consumption in each given country is the outcome of a unique, hardly reproducible process of local adjustment. Each country, with its energy consumption per capita, is a rare event. Seen from this angle, my three empirical states of energy consumed per capita could occur with the probability of the Poisson distribution, estimated with the whole sample of data. With this specific take on the thing, my three empirical values become: Ecuador (1992) = 0, Slovak Republic (2000) = 0,999999851, and Portugal (2003) = 9,4384E-31.
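For the record, here is how those three readings of the same numbers can be computed side by side. The exact parameters behind the figures quoted above are not spelled out here, so this sketch uses my own assumptions (mean, standard deviation and Poisson intensity estimated on the same small sample); the point is the qualitative contrast between the three standardizations, not reproducing the digits.

```python
# Three ways of standardizing the same observations: fraction of the maximum,
# normal distribution, Poisson distribution. Parameters are estimated on this
# tiny sample, so the figures differ from those quoted in the text.
import numpy as np
from scipy import stats

values = np.array([629.221, 3292.609, 2400.766])      # Ecuador 1992, Slovak Rep. 2000, Portugal 2003
labels = ["Ecuador (1992)", "Slovak Republic (2000)", "Portugal (2003)"]

maximum = values.max()                                 # stand-in for the whole-sample maximum
mean, sd = values.mean(), values.std(ddof=1)
lam = values.mean()                                    # Poisson intensity, same stand-in logic

as_fraction_of_max = values / maximum
as_normal = stats.norm.cdf(values, loc=mean, scale=sd)
as_poisson = stats.poisson.cdf(values, mu=lam)

for row in zip(labels, as_fraction_of_max, as_normal, as_poisson):
    print("{:<24} max: {:.3f}  normal: {:.3f}  poisson: {:.3g}".format(*row))
# whatever the exact parameters, the Poisson view pushes the values towards the
# extremes (close to 0 or close to 1): the 'sharp division' described in the text
```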

I come back to Morocco. I perceive some behaviours in Moroccans as erratic. I think I tend to think Poisson distribution. I expect some very tightly defined, rare event of behaviour, and when I see none around, I discard everything else as completely not fitting the bill. As I think about it, I guess most of our human intelligence is Poisson-based. We think ‘good vs bad’, ‘edible vs not food’, ‘friend vs foe’ etc.

I am consistently delivering good, almost new science to my readers, and love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful if you suggest two things that Patreon suggests I suggest to you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Combinatorial meaning and the cactus

My editorial on You Tube

I am back into blogging, after over two months of pausing. This winter semester I am going, probably, for a record workload in terms of classes: 630 hours in total. October and November look like an immersion time, when I had to get into gear for that amount of teaching. I noticed one thing that I haven’t exactly been aware of so far, or maybe not as distinctly as I am now: when I teach, I love freestyling about the topic at hand. Whatever hand of nice slides I prepare for a given class, you can bet on me going off the beaten track and into the wilderness of intellectual quest, like by mid-class. I mean, I have nothing against Power Point, but at some point it becomes just so limiting… I remember that conference, one year ago, when the projector went dead during my panel (i.e. during the panel when I was supposed to present my research). I remember that mixed, and shared, feeling of relief and enjoyment in the people present in the room: ‘Good. Finally, no slides. We can like really talk science’.

See? Once again, I am going off track, and that in just one paragraph of writing. You can see what I mean when I refer to me going off track in class. Anyway, I discovered one more thing about myself: freestyling and sailing uncharted intellectual waters has a cost, and this is a very clear and tangible biological cost. After a full day of teaching this way I feel as if my brain was telling me: ‘Look, bro. I know you would like to write a little, but sorry: no way. Them synapses are just tired. You need to give me a break’.

There is a third thing I have discovered about myself: that intense experience of teaching makes me think a lot. I cannot exactly put all this in writing on the spot, fault of fresh neurotransmitter available, still all that thinking tends to crystallize over time and with some patience I can access it later. Later means now, as it seems. I feel that I have crystallized enough and I can start to pull it out into the daylight. The « it » consists, mostly, in a continuous reflection on collective intelligence. How are we (possibly) smart together?

As I have been thinking about it, three events combined and triggered in me a string of more specific questions. I watched another podcast featuring Jordan Peterson, whom I am a big fan of, and who raised the topic of the neurobiological context of meaning. How does our brain make meaning, and how does it find meaning in sensory experience? On the other hand, I have just finished writing the manuscript of an article on the energy-efficiency of national economies, which I have submitted to the ‘Energy Economics’ journal, and which, almost inevitably, made me work with numbers and statistics. As I had been doing that empirical research, I found out something surprising: the most meaningful econometric results came to the surface when I transformed my original data into local coefficients of an exponential progression that hypothetically started in 1989. Long story short, these coefficients are essentially growth rates, which behave in a peculiar way, due to their arithmetical structure: they decrease very quickly over time, whatever the underlying raw empirical observation, as if they were representing weakening shock waves sent out by an explosion in 1989.

Different transformations of the same data, in that research of mine, produced different statistical meanings. I am still working out what it exactly means, by the way. As I was putting that together with Jordan Peterson’s thoughts on meaning as a biological process, I asked myself: what is the exact meaning of the fact that we, as a scientific community, assign meaning to statistics? How is it connected with collective intelligence?

I think I need to start more or less where Jordan Peterson moves, and ask ‘What is meaning?’. No, not quite. The ontological type, I mean the ‘What?’ type of question, is a mean beast. Something like a hydra: you cut off the head, namely you explain the thing, you think that Bob’s your uncle, and a new head pops up, like out of nowhere, and it bites you, you know where. The ‘How?’ question is a bit more amenable. This one is like one of those husky dogs. Yes, it is semi-wild, and yes, it can bite you, but once you tame it, and teach it to pull that sleigh, it will just pull. So I ask ‘How is meaning?’. How does meaning occur?

There is a particular type of being smart together, which I have been specifically interested in for like the last two months. It is the game-based way of being collectively intelligent. The theory of games is a well-established basis for studying human behaviour, including that of whole social structures. As I was thinking about it, there is a deep reason for that. Social interactions are, well, interactions. It means that I do something and you do something, and those two somethings are supposed to make sense together. They really do on one condition: my something needs to be somehow conditioned by how your something unfolds, and vice versa. When I do something, I come to a point when it becomes important for me to see your reaction to what I do, and only once I have seen it will I develop my action further.

Hence, I can study collective action (and interaction) as a sequence of moves in a game. I make my move, and I stop moving, for a moment, in order to see your move. You make yours, and it triggers a new move in me, and so the story goes further on in time. We can experience it very vividly in negotiations. Anyone with any experience in having serious talks with other people, thus in negotiating something, knows that it is pretty counter-productive to keep pushing our point in an unbroken stream of speech. It is much more functional to pace our strategy into separate strings of argumentation, and between them, to wait for what the other person says. I have already given a first theoretical go at the thing in « Couldn’t they have predicted that? ».

This type of social interaction, when we pace our actions into game-like moves, is a way of being smart together. We can come up with new solutions, or with the understanding of new problems – or a new understanding of old problems, as a matter of fact – and we can do it starting from positions of imperfect agreement and imperfect coordination. We try to make (apparently) divergent points, or we pursue (apparently) divergent goals, and still, if we accept to wait for each other’s reaction, we can coordinate and/or agree about those divergences, so as to actually figure out, and do, some useful s**t together.

What is the connection with the results of my quantitative research? Let’s imagine that we play a social game: each of us makes their move, and then they wait for the moves of other players. The state of the game at any given moment can be represented as the outcome of past moves. The state of reality is like a brick wall, made of bricks laid one by one, and the state of that brick wall is the outcome of the past laying of bricks. In the general theory of science, this dependence on the past is called hysteresis. There is a mathematical function reputed to represent that thing quite nicely: the exponential progression. On a timeline, I define equal intervals. To each period of time, I assign a value y(t) = e^(t*a), where ‘t’ is the ordinal of the time period, ‘e’ is a mathematical constant, the base of the natural logarithm, e ≈ 2.71828, and ‘a’ is what we call the exponential coefficient.

There is something else to that y(t) = e^(t*a) story. If we think in terms of a broader picture, and assume that time is essentially what we imagine it is, the ‘t’ part can be replaced by any number we imagine. Then, Euler’s formula steps in: e^(i*x) = cos x + i*sin x. If you paid attention in math classes, at high school, you might remember that sine and cosine, the two trigonometric functions, have a peculiar property. As they refer to angles, at the end of the day they refer to a full circle of 360°. It means they go in a circle, thus in a cycle, only shifted against each other by a quarter of that cycle: when the sine peaks, the cosine passes through zero, and the other way round. We can think about each occurrence we experience – the ‘x’ – as a nexus of two, mutually offset cycles, and they can be represented as, respectively, the sine, and the cosine of that occurrence ‘x’. When I grow in height (well, when I used to), my current height can be represented as the nexus of natural growth (sine), and natural depletion with age (cosine), that sort of thing.
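A quick numerical check of Euler’s formula, nothing more than a sanity test that the two sides really coincide:

```python
import cmath
import math

# Numerical check of Euler's formula: e^(i*x) = cos(x) + i*sin(x)
for x in (0.0, math.pi / 4, math.pi / 2, math.pi, 2.5):
    left = cmath.exp(1j * x)
    right = complex(math.cos(x), math.sin(x))
    print(f"x = {x:.4f}   e^(ix) = {left:.4f}   cos x + i sin x = {right:.4f}")
    assert abs(left - right) < 1e-12  # the two sides coincide up to rounding error
```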

Now, let’s suppose that we, as a society, play two different games about energy. One game makes us more energy efficient, ‘cause we know we should (see Settlement by energy – can renewable energies sustain our civilisation?). The other game makes us max out on our intake of energy from the environment (see Technological Change as Intelligent, Energy-Maximizing Adaptation). At any given point in time, the incremental change in our energy efficiency is the local equilibrium between those two games. Thus, if I take the natural logarithm of our energy efficiency at a given point in space-time, thus the coefficient of GDP per kg of oil equivalent in energy consumed, that natural logarithm is the outcome of those two games, or, from a slightly different point of view, it descends from the number of consecutive moves made (the ordinal of time period we are currently in), and from a local coefficient – the equivalent of ‘i’ in the Euler’s formula – which represents the pace of building up the outcomes of past moves in the game.

I go back to that ‘meaning’ thing. The consecutive steps ‘t’ in an exponential progression y(t) = e^(t*a) correspond to successive rounds of moves in the games we play. There is a core structure to observe: the length of what I call ‘one move’, which means a sequence of actions that each person involved in the interaction carries out without pausing and waiting for the reaction observable in other people in the game. When I say ‘length’, it involves a unit of measurement, and here, I am quite open. It can be a length of time, or the number of distinct actions in my sequence. The length of one move in the game determines the pace of the game, and this, in turn, sets the timeframe for the whole game to produce useful results: solutions, understandings, coordinated action etc.

Now, where the hell is any place for ‘meaning’ in all that game stuff? My view is the following: in social games, we sequence our actions into consecutive moves, with some waiting-for-reaction time in between, because we ascribe meaning to those sub-sequences that we define as ‘one move’. The way we process meaning matters for the way we play social games.

I am a scientist (well, I hope), and for me, meaning occurs very largely as I read what other people have figured out. So I stroll down the discursive avenue named ‘neurobiology of meaning’, welcomingly lit by the lampposts of Science Direct. I am calling by an article by Lee M. Pierson and Monroe Trout, entitled ‘What is consciousness for?’[1]. The authors formulate a general hypothesis, unfortunately not supported (yet?) with a direct empirical check, that consciousness had been occurring, back in the day, I mean like really back in the day, as cognitive support of volitional movement, and evolved, since then, into more elaborate applications. Volitional movement is non-automatic, i.e. decisions have to be made in order for the movement to have any point. It requires quick assemblage of data on the current situation, and consciousness, i.e. the awareness of many abstract categories at the same time, could be the solution.

According to that approach, meaning occurs as a process of classification in the neurologically stored data that we need to use virtually simultaneously in order to do something as fundamental as reaching for another can of beer. Classification of data means grouping into sets. You have a random collection of data from sensory experience, like a homogenous cloud of information. You know, the kind you experience after a particularly eventful party. Some stronger experiences stick out: the touch of cold water on your naked skin, someone’s phone number written on your forearm with a lipstick etc. A question emerges: should you call this number? It might be your new girlfriend (i.e. the girlfriend whom you don’t consciously remember as your new one, but whom you’d better call back if you don’t want your car splashed with acid), or it might be a drug dealer whom you’d better not call back. You need to group the remaining data into functional sets so as to take the right action.

So you group, and the challenge is to make the right grouping. You need to collect the not-quite-clear-in-their-meaning pieces of information (Whose lipstick had that phone number been written with? Can I associate a face with the lipstick? For sure, the right face?). One grouping of data can lead you to a happy life, another one can lead you into deep s**t. It could be handy to sort of quickly test many alternative groupings as for their elementary coherence, i.e. hold all that data in front of you, for a moment, and contemplate flexibly many possible connections. Volitional movement is very much about that. You want to run? Good. It would be nice not to run into something that could hurt you, so it would be good to cover a set of sensory data, combining something present (what we see), with something we remember from the past (that thing on the 2 o’clock azimuth stings like hell), and sort of quickly turn and return all that information so as to steer clear from that cactus, as we run.

Thus, as I follow the path set by Pierson and Trout, meaning occurs as the grouping of data in functional categories, and it occurs when we need to do it quickly and sort of under pressure of getting into trouble. I am going onto the level of collective intelligence in human social structures. In those structures, meaning, i.e. the emergence of meaningful distinctions communicable between human beings and possible to formalize in language, would occur as said structures need to figure something out quickly and under uncertainty, and meaning would allow putting together the types of information that are normally compartmentalized and fragmented.

From that perspective, one meaningful move in a game encompasses small pieces of action which we intuitively guess we should immediately group together. Meaningful moves in social games are sequences of actions, which we feel like putting immediately back to back, without pausing and letting the other player do their thing. There is some sort of pressing immediacy in that grouping. We guess we just need to carry out those actions smoothly one after the other, in an unbroken sequence. Wedging an interval of waiting time in between those actions could put our whole strategy at peril, or we just think so.

When I apply this logic to energy efficiency, I think about business strategies regarding innovation in products and technologies. When we launch a new product, or implement a new technology, there is something like a fixed pattern to follow. When you start beta testing a new mobile app, for example, you don’t stop in the middle of testing. You carry out the tests up to their planned schedule. When you start launching a new product (reminder: more products made on the same energy base mean greater energy efficiency), you keep launching until you reach some sort of conclusive outcome, like unequivocal success or failure. The social games we play around energy efficiency could very well be paced by this sort of business-strategy-based moves.

I pick up another article, that by Friedemann Pulvermüller (2013[2]). The main thing I see right from the beginning is that apparently, neurology is progressively dropping the idea of one, clearly localised area in our brain, in charge of semantics, i.e. of associating abstract signs with sensory data. What we are discovering is that semantics engage many areas in our brain into mutual connection. You can find developments on that issue in: Patterson et al. 2007[3], Bookheimer 2002[4], Price 2000[5], and Binder & Desai 2011[6]. As we use words, thus as we pronounce, hear, write or read them, that linguistic process directly engages (i.e. is directly correlated with the activation of) sensory and motor areas of our brain. That engagement follows multiple, yet recurrent patterns. In other words, instead of having one mechanism in charge of meaning, we are handling different ones.

After reviewing a large bundle of research, Pulvermüller proposes four different patterns: referential, combinatorial, emotional-affective, and abstract semantics. Each time, the semantic pattern consists in one particular area of the brain acting as a boss who wants to be debriefed about something from many sources, and starts pulling together many synaptic strings connected to many places in the brain. Five different pieces of cortex come recurrently as those boss-hubs, hungry for differentiated data, as we process words. They are: inferior frontal cortex (iFC, so far most commonly associated with the linguistic function), superior temporal cortex (sTC), inferior parietal cortex (iPC), inferior and middle temporal cortex (m/iTC), and finally the anterior temporal cortex (aTC). The inferior frontal cortex (iFC) seems to engage in the processing of words related to action (walk, do etc.). The superior temporal cortex (sTC) looks like seriously involved when words related to sounds are being used. The inferior parietal cortex (iPC) activates as words connect to space, and spatio-temporal constructs. The inferior and middle temporal cortex (m/iTC) lights up when we process words connected to animals, tools, persons, colours, shapes, and emotions. That activation is category specific, i.e. inside m/iTC, different Christmas trees start blinking as different categories among those are being named and referred to semantically. The anterior temporal cortex (aTC), interestingly, has not been associated yet with any specific type of semantic connections, and still, when it is damaged, semantic processing in our brain is generally impaired.

All those areas of the brain have other functions, besides that semantic one, and generally speaking, the kind of meaning they process is correlated with the kind of other things they do. The interesting insight, at this point, is the polyvalence of the cortical areas that we call ‘temporal’, thus involved in the perception of time. Physicists insist very strongly that time is largely a semantic construct of ours, i.e. time is what we think there is rather than what really is, out there. In physics, what exists is rather a sequential structure of reality (things happen in an order) than what we call time. That review of literature by Pulvermüller indirectly indicates that time is a piece of meaning that we attach to sounds, colours, emotions, animals and people. Sounds come as logical: they are sequences of acoustic waves. On the other hand, how is our perception of colours, or people, connected to our concept of time? This is a good one to ask, and a tough one to answer. What I would look for is recurrence. We identify persons as distinct ones as we interact with them recurrently. Autistic people frequently have that problem: when you put on a different jacket, they have a hard time accepting you are the same person. Identification of animals or emotions could follow the same logic.

The article discusses another interesting issue: the more abstract the meaning is, the more different regions of the brain it engages. The really abstract ones, like ‘beauty’ or ‘freedom’, are super Christmas-trees: they provoke involvement all over the place. When we do abstraction, in our mind, for example when writing poetry (OK, just good poetry), we engage a substantial part of our brain. This is why we can be lost in our thoughts: those thoughts, when really abstract, are really energy-consuming, and they might require shutting down some other functions.

My personal understanding of the research reviewed by Pulvermüller is that at the neurological level, we process three essential types of meaning. One consists in finding our bearings in reality, thus in identifying things and people around, and in assigning emotions to them. It is something like a mapping function. Then, we need to do things, i.e. to take action, and that seems to be a different semantic function. Finally, we abstract, thus we connect distant parcels of data into something that has no direct counterpart neither in the mapped reality, nor in our actions.

I have an indirect insight, too. We have a neural wiring, right? We generate meaning with that wiring, right? Now, how does adaptation occur, in that scheme, over time? Do we just adapt the meaning we make to the neural hardware we have, or is there a reciprocal kick, I mean from meaning to wiring? So far, neurological research has demonstrated that physical alteration in specific regions of the brain impacts semantic functions. Can it work the other way round, i.e. can recurrent change in the semantics being processed alter the hardware we have between our ears? For example, as we process a lot of abstract concepts, like ‘taxes’ or ‘interest rate’, can our brains adapt from generation to generation, so as to minimize the gradient of energy expenditure as we shift between levels of abstraction? If we could, we would become more intelligent, i.e. able to handle larger and more differentiated sets of data in a shorter time.

How does all of this translate into collective intelligence? Firstly, there seem to be layers of such intelligence. We can be collectively smart sort of locally – and then we handle those more basic things, like group identity or networks of exchange – and then we can (possibly) become collectively smarter at more combinatorial a level, handling more abstract issues, like multilateral peace treaties or climate change. Moreover, the gradient of energy consumed between the collective understanding of simple and basic things, on the one hand, and of the overarching abstract issues, on the other, is a good predictor regarding the capacity of the given society to survive and thrive.

Once again, I am trying to associate this research in neurophysiology with my game-theoretical approach to energy markets. First of all, I recall the three theories of games co-awarded the economic Nobel prize in 1994, namely those by John Nash, John (János) Harsanyi, and Reinhard Selten. I start with the latter. Reinhard Selten claimed, and seems to have proven, that social games have a memory, and the presence of such memory is needed in order for us to be able to learn collectively through social games. You know those situations of tough talks, when the other person (or you) keeps bringing forth the same argumentation over and over again? This is an example of a game without much memory, i.e. without much learning. In such a game we repeat the same move, like a fish banging its head against the glass wall of an aquarium. Playing without memory is possible in just some games, e.g. tennis, or poker if the opponent is not too tough. In other games, like chess, repeating the same move is not really possible. Such games force learning upon us.

Active use of memory requires combinatorial meaning. We need to know what is meaningful in order to remember it as meaningful, and thus to consider it as valuable data for learning. The more combinatorial meaning is, inside a supposedly intelligent structure, such as our brain, the more energy-consuming that meaning is. Games played with memory and active learning could be more energy-consuming for our collective intelligence than games played without. Maybe that whole thing of electronics and digital technologies, so hungry for energy, is a mechanism that we, the collective human intelligence, have put in place in order to learn more efficiently through our social games?
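Just to illustrate Selten’s point about memory in repeated games, here is a minimal sketch of an iterated prisoner’s dilemma: a memoryless strategy (always defect) against a strategy with one-move memory (tit-for-tat). It is only an illustration of the general idea, not a model of the energy games I am discussing.

```python
# Payoffs for (my_move, their_move): C = cooperate, D = defect
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def memoryless(_history):
    """A strategy without memory: it ignores the history entirely."""
    return 'D'

def tit_for_tat(history):
    """A strategy with one-move memory: it repeats the opponent's last move."""
    return 'C' if not history else history[-1][1]

def play(strategy_a, strategy_b, rounds=50):
    history_a, history_b = [], []   # each entry: (my_move, their_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

# Two memory-endowed players settle into mutual cooperation; the memoryless defector never learns.
print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))
print("memoryless vs tit-for-tat: ", play(memoryless, tit_for_tat))
```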

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide to do so, I will be grateful for your take on two things that Patreon suggests I ask you about. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

[1] Pierson, L. M., & Trout, M. (2017). What is consciousness for?. New Ideas in Psychology, 47, 62-71.

[2] Pulvermüller, F. (2013). How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends in cognitive sciences, 17(9), 458-470.

[3] Patterson, K. et al. (2007) Where do you know what you know? The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 8, 976–987

[4] Bookheimer, S. (2002) Functional MRI of language: new approaches to understanding the cortical organization of semantic processing. Annu. Rev. Neurosci. 25, 151–188

[5] Price, C.J. (2000) The anatomy of language: contributions from functional neuroimaging. J. Anat. 197, 335–359

[6] Binder, J.R. and Desai, R.H. (2011) The neurobiology of semantic memory. Trends Cogn. Sci. 15, 527–536

The other cheek of business

My editorial

I am turning towards my educational project. I want to create a step-by-step teaching method, where I guide a student in their learning of social sciences, and this learning happens by doing research in social sciences. I have a choice between imposing some predefined topics for research, or inviting each student to propose their own. The latter seems certainly more exciting. As a teacher, I know what a brainstorm is, and believe me: a dozen dedicated and bright individuals, giving their ideas about what they think it is important to do research about, can completely uproot your (my own?) ideas as to what it is important to do research about. Still, I can hardly imagine me, individually, handling efficiently all that bloody blissful diversity of ideas. Thus, the first option, namely imposing some predefined topics for research, seems just workable, whilst still being interesting. People can get creative about methods of research, after all, not just about topics for it. Most of the great scientific inventions were actually methodological, and what was really breakthrough about them consisted in the universal applicability of those newly invented methods.

Thus, what I want to put together is a step-by-step path of research, communicable and teachable, regarding my own topics for research. Whilst I still admit the possibility of student-generated topics coming my way, I will consider them as a luxurious delicacy I can indulge in under the condition I can cope with those main topics. Anyway, my research topics for 2018 are:

  1. Smart cities, their emergence, development, and the practical ways of actually doing business there
  2. Fintech, and mostly cryptocurrencies, and even more mostly those hybrid structures, where cryptocurrencies are combined with the “traditional” financial assets
  3. Renewable energies
  4. Social and technological change as a manifestation of collective intelligence

Intuitively, I can wrap (I), (II), and (III) into a fancy parcel, decorated with (IV). The first three items actually coincide in time and space. The fourth one is that kind of decorative cherry you can put on a cake to make it look really scientific.

As I start doing research about anything, hypotheses come in handy. If you investigate a criminal case, assuming that anyone could have done anything anyhow gives you certainly the biggest possible picture, but the picture is blurred. Contours fade and dance in front of your eyes, idiocies pop up, and it is really hard to stay reasonable. On the other hand, if you make some hypotheses as to who did what and how, your investigation gathers both speed and sense. This is what I strongly advocate for: make some hypotheses at the starting point of your research. Before I go further with hypothesising on my topics for research, a few preliminary remarks can be useful. First of all, we always hypothesise about anything we experience and think. Yes, I am claiming this very strongly: anything we think is a hypothesis or contains a hypothesis. How come? Well, we always generalise, i.e. we simplify and hope the simplification will hold. We very nearly always have less data than we actually need to make, with absolute certainty, the judgments we make. Actually, everything we pretend to claim with certainty is an approximation.

Thus, we hypothesise intuitively, all the time. Now, I summon the spirit of Milton Friedman from the abyss of pre-Facebook history, and he reminds us of the four basic levels of hypothesising. Level one: regarding any given state of nature, we can formulate an indefinitely great number of hypotheses; in practice, there are infinitely many of them. Level two: just some of those infinitely many hypotheses are checkable at all, with the actual access to data I have. Level three: among all the checkable hypotheses, with the data at hand, there are just some regarding which I can say with reasonable certainty whether they are true or false. Level four: it is much easier to falsify a hypothesis, i.e. to say under what conditions it does not hold at all, than to verify it, i.e. to claim under what conditions it is true. This comes from level one: each hypothesis has cousins, which sound almost exactly the same, but just almost, so under given conditions many mutually non-exclusive hypotheses can be true.

Now, some of you could legitimately ask: ‘Good, so I need to start with formulating infinitely many hypotheses, then check which of them are checkable, then identify those allowing more or less rigorous scientific proof? Great. It means that at the very start I get entangled for eternity in checking how checkable each of the infinitely many hypotheses I can think of is. Not very promising as for results’. It is legit to say that, and this is the reason why, in science, we use that tool known as Ockham’s razor. It serves to give a cognitive shave to badly kept realities. In its traditional form it consists in assuming that the most obvious answer is usually the correct one. Still, as you have a closer look at this precise phrasing, you can see a lot of hidden assumptions. It assumes you can distinguish the obvious from the dubious, which, in turn, means that you have already applied the razor beforehand. Bit of a loop. The practical way of wielding that razor is to assume that the most obvious thing is observable reality. I start with finding my bearings in reality. Recently, I gave an example of that: check ‘My individual square of land, 9 meters on 9’. I look around and I assess what kind of phenomena I can intuitively connect, at this stage of research, to the general topic of my research, and which of them I can observe, measure, and communicate intelligibly about. These are my anchors in reality.

I look at those things, I measure them, and I do my best to communicate my observations to other people. This is when Ockham’s razor is put to an ex post test: if the shave has been really neat, other people can easily understand what I am communicating. If I and a bunch of other looneys (oops! sorry, I wanted to say ‘scientists’) can agree on the current reading of the density of population, and not really on the reading of unemployment (‘those people could very well get a job! they are just lazy!’), then the density of population is our Ockham’s razor, and unemployment not really (I love the ‘not really’ expression: it can cover any amount of error and bullshit). This is the right moment for distinguishing the obvious from the dubious, and for formulating my first hypotheses, and then I move backwards along Milton Friedman’s four levels of hypothesising. The empirical application of Ockham’s razor has allowed me to define what I can actually check in real life, and this, in turn, allows distinguishing between two big bags, each with hypotheses inside. One bag contains the verifiable hypotheses, the other one is for the speculative ones, i.e. those non-verifiable.

Anyway, I want my students to follow a path of research together with me. My first step is to organize the first step on this path, namely to find the fundamental, empirical bearings as for those four topics: smart cities, Fintech, renewable energies and collective intelligence. The topic of smart cities certainly can find its empirical anchors in the prices of real estate, and in the density of population, as well as in the local rate of demographic growth. When these three dance together – once again, you can check ‘My individual square of land, 9 meters on 9’ – the business of building smart cities suddenly gets some nice, healthy, reddish glow on its cheeks. Businesses have cheeks, didn’t you know? Well, to be quite precise, businesses have other cheeks. The other cheek, in a business, is what you don’t want to expose when you have already got hit somewhere else. Yes, you could call it the crown jewels as well, but ‘other cheek’ just sounds more elegant.

As for Fintech, the first and most obvious observation, from my point of view, is diversity. The development of Fintech calls into existence many different frameworks for financial transactions, in times and places when and where, just recently, we had just one such framework. Observing Fintech means, in the first place, observing diversity in alternative financial frameworks – such as official currencies, cryptocurrencies, securities, corporations, payment platforms – in the given country or industry. In terms of formal analytical tools, diversity refers to a cross-sectional distribution and its general shape. I start with taking a convenient denominator. The Gross Domestic Product seems a good one, yet you can choose something else, like the aggregate value of intellectual property embodied in selfies posted on Instagram. Once you have chosen your denominator, you measure the outstanding balances, and the current flows, in each of those alternative financial frameworks, in the units of your denominator. You get things like the market capitalization of Ethereum as % of GDP vs. the supply of US dollars as % of the US national GDP, etc.
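Here is a minimal sketch of that kind of measurement, with made-up numbers: outstanding balances of alternative financial frameworks denominated in the GDP, and the shape of the cross-sectional distribution summarized with a simple entropy-type index (my choice of index here is just an assumption, for illustration).

```python
import math

gdp = 500_000_000_000  # hypothetical national GDP, in currency units

# Hypothetical outstanding balances of alternative financial frameworks, same units as GDP
balances = {
    'official currency (broad money)': 350_000_000_000,
    'listed securities':               220_000_000_000,
    'cryptocurrencies':                  8_000_000_000,
    'payment platform float':            3_000_000_000,
}

shares_of_gdp = {name: value / gdp for name, value in balances.items()}
for name, share in shares_of_gdp.items():
    print(f"{name}: {share:.1%} of GDP")

# Shape of the cross-sectional distribution across frameworks: Shannon entropy of the
# relative weights (0 = one framework only, log(n) = perfectly even diversity)
total = sum(balances.values())
weights = [v / total for v in balances.values()]
entropy = -sum(w * math.log(w) for w in weights)
print(f"diversity (Shannon entropy): {entropy:.3f} out of max {math.log(len(weights)):.3f}")
```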

I pass to renewable energies now. When I think about what is the most obviously observable in renewable energies, it is a dual pattern of development. We can have renewable sources of energy supplanting fossil fuels: this is the case in the developed countries. On the other hand, there are places on Earth where electricity from renewable sources is the first source of electricity ever: those people simply didn’t have the juice to power their freezer before that wind farm started up in the whereabouts. This is the pattern observable in the developing countries. In the zone of overlap between those two patterns, we have emerging markets: there is a bit of shifting from fossils to green, and there is another bit of renewables popping up where nothing had dared to pop up in the past. Those patterns are observable as, essentially, two metrics, which can possibly be combined: the final consumption of energy per capita, and the share of renewable sources in the final consumption of energy. Crude as they are, they allow observing a lot, especially when combined with other variables.

And so I come to collective intelligence. This is seemingly the hardest part. How can I say that any social entity is kind of smart? It is even hard to say in humans. I mean, virtually everybody claims they are smart, and I claim I’m smart, but when it comes to actual choices in real life, I sometimes feel so bloody stupid… Good, I am getting a grip. Anyway, intelligence for me is the capacity to figure out new, useful things on the grounds of memory about old things. There is one aspect of that figuring out, which is really intriguing my internal curious ape: the phenomenon called ultra-socialisation, or supersocialisation. I am inspired, as for this one, by the work of a group of historians: see ‘War, space, and the evolution of Old World complex societies’ (Turchin et al. 2013[1]). As a matter of fact, Jean Jacques Rousseau, in his “Social Contract”, was chasing very much the same rabbit. The general point is that any group of dumb assholes can get social on the level of immediate gains. This is how small, local societies emerge: I am better at running after woolly mammoths, you are better at making spears, which come in handy when the mammoth stops running and starts arguing, and somebody else is better at healing wounds. Together, we can gang up and each of us can experience immediate benefits of such socialisation. Still, what makes societies, according to Jean Jacques Rousseau, as well as according to Turchin et al., is the capacity to form institutions of large geographical scope, which require getting over the obsession with immediate gains and provide long-term, developmental a kick. What is observable, then, are precisely those institutions: law, state, money, universally enforceable contracts etc.

Institutions – and this is the really nourishing takeaway from that research by Turchin et al. (2013[2]) – are observable as a genetic code. I can decompose institutions into a finite number of observable characteristics, and each of them can be observed as switched on, or switched off. Complex institutional frameworks can be denoted as sequences of 1’s and 0’s, depending on whether the given characteristic is, respectively, present or absent. Somewhere between Fintech and collective intelligence, I have that metric which I found really meaningful in my research: the share of aggregate depreciation in the GDP. This is the relative burden, imposed on the current economic activity, due to the phenomenon of technologies getting old and being replaced by younger ones. When technologies get old, accountants account for that fact by depreciating them, i.e. by writing off the books a fraction of their initial value. All that writing off, done by the accountants active in a given place and time, makes aggregate depreciation. When denominated in the units of current output (GDP), it tends to get into interesting correlations (the way variables can socialize) with other phenomena.
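A minimal sketch of both ideas, with hypothetical institutional characteristics and made-up national accounts figures (the list of characteristics below is mine, for illustration only, not the one used by Turchin et al.):

```python
# Hypothetical institutional characteristics, each observable as present (1) or absent (0)
institutions = {
    'codified law':                      1,
    'professional bureaucracy':          1,
    'territorially uniform currency':    1,
    'universally enforceable contracts': 1,
    'independent courts':                0,
}

# The institutional framework as a 'genetic code': a sequence of 1's and 0's
genetic_code = ''.join(str(v) for v in institutions.values())
print("institutional code:", genetic_code)

# Aggregate depreciation as a share of GDP (made-up numbers, same currency units)
aggregate_depreciation = 68_000_000_000
gdp = 500_000_000_000
print(f"depreciation burden: {aggregate_depreciation / gdp:.1%} of GDP")
```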

And so I come up with my observables: density of population, demographic growth, prices of real estate, balances and flows of alternative financial platforms expressed as percentages of the GDP, final consumption of energy per capita, share of renewable energies in said final consumption, aggregate depreciation as % of the GDP, and the genetic code of institutions. What I can do with those observables is to measure their levels, growth rates, and cross-sectional distributions, and, at a more elaborate level, their correlations, cointegrations, and their memory. The latter can be observed, among other methods, as their Gaussian vector autoregression, as well as their geometric Brownian motion. This is the first big part of my educational product. This is what I want to teach my students: collecting that data, observing and analysing it, and finally hypothesising on the grounds of basic observation.
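For illustration, here is a minimal sketch of that toolkit on two made-up observables: levels, growth rates, their correlation, and a geometric Brownian motion simulated for one of them. Nothing here is empirical; the parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two made-up observables over 20 periods (e.g. density of population, energy per capita)
x = 100 * np.exp(np.cumsum(rng.normal(0.02, 0.01, 20)))
y = 50 * np.exp(np.cumsum(rng.normal(0.015, 0.02, 20)))

# Growth rates (log-differences) and their correlation
gx, gy = np.diff(np.log(x)), np.diff(np.log(y))
print("correlation of growth rates:", np.corrcoef(gx, gy)[0, 1])

# Geometric Brownian motion for x: dX = mu*X*dt + sigma*X*dW (exact discretization)
mu, sigma, dt, steps = gx.mean(), gx.std(), 1.0, 20
path = [x[-1]]
for _ in range(steps):
    path.append(path[-1] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal()))
print("simulated GBM path (first 5 values):", np.round(path[:5], 2))
```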

[1] Turchin, P., Currie, T. E., Turner, E. A. L., Gavrilets, S., 2013, War, space, and the evolution of Old World complex societies, Proceedings of the National Academy of Sciences, vol. 110, no. 41, pp. 16384–16389

[2] Turchin, P., Currie, T. E., Turner, E. A. L., Gavrilets, S., 2013, War, space, and the evolution of Old World complex societies, Proceedings of the National Academy of Sciences, vol. 110, no. 41, pp. 16384–16389

Inside a vector

My editorial

I am returning to the issue of collective memory, and to collective memory recognizable in numbers, i.e. in the time series of variables pertinent to the state of a society (see ‘Back to blogging, trying to define what I remember’). And so I take my general formula xi(t) = f1[xi(t – b)] + f2[xi(t – STOCH)] + Res[xi(t)], which means that at any given moment ‘t’, the current information xi(t) about the social system consists in some sort of constant-loop remembering xi(t – b), with ‘b’ standing for a fixed temporal window (in an average human it seems to be like 3 weeks), coming along with a more irregular, stochastic pick of past information, the [xi(t – STOCH)] term, and on top of all that is the residual Res[xi(t)] of current information, hardly attributable to any remembering of the past, which, for lack of a better expression, can be grasped as the strictly spoken present.

I am reviewing the available mathematical tools for modelling such a process with hypothetical memory. I start with something that I could label ‘perfect remembering and only remembering’, or the Gaussian process. It represents a system, which essentially does not learn much, and is predictable on the grounds of its mean and covariance. When I do linear regression, which you could have seen a lot in my writings on this blog, I more or less consciously follow the logic of a Gaussian process. That logic is simple: if I can draw a straight line that matches the empirical distribution of my real-life variable, and if I prolong this line into the future, it will make a good predictor of the future values in my variable. It doesn’t even have to be one variable. I can deal with a vector made of many variables as well. As a matter of fact, the mathematical notation used in the Gaussian process basically refers to vectors of variables. It might be the right moment for explaining what the hell is a vector in quantitative analysis. Well, I am a vector, and you, my reader, you are a vector, and my cousin is a vector as well, and his dog is a vector. My phone is a vector, and any other phone the same. Anything we encounter in life is complex. There are no simple phenomena, even in the middle of summer holidays, on some remote tropical beach. Anything we can think of has many characteristics. To the extent that those characteristics can be represented as numbers, the state of nature at a given moment is a set of numbers. These numbers can be considered as coordinates in many criss-crossing manifolds. I have an age in the manifold of ages, a height in the manifold of heights, and numerically expressible a hair colour in the manifold of hair colours etc. Many coordinates make a vector, stands to reason.

And so I have that vector X* made of n variables, stretched over m periods of time. Each point in that vector is characterized by its appurtenance to the precise variable i out of those n variables, as well as its observability at a given moment j out of the total duration. It can look more or less like that: X* = {X(t1, 1), X(t2, 2), …, X(tj, i), …, X(tm, n)}, or, in a more straightforward form of a matrix, it is something like:

                            Moments in time (or any indexed value you want)
                            t1           t2           …           tj           tm
               | I          X(t1, I)     X(t2, I)     …           X(tj, I)     X(tm, I)
X* = Variables | II         X(t1, II)    X(t2, II)    …           X(tj, II)    X(tm, II)
               | …
               | n          X(t1, n)     X(t2, n)     …           X(tj, n)     X(tm, n)
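As a data structure, such a vector-over-time can be sketched, for example, with pandas; the variables and values below are hypothetical, and time runs down the rows, i.e. this is the transpose of the matrix above.

```python
import pandas as pd

# The vector X*: n variables observed over m moments in time (made-up values)
data = {
    'density_of_population': [122.0, 123.1, 124.0, 124.6],
    'price_of_real_estate':  [2100,  2180,  2230,  2310],
    'energy_per_capita':     [2.55,  2.61,  2.58,  2.66],
}
X_star = pd.DataFrame(data, index=pd.Index([2014, 2015, 2016, 2017], name='t'))

print(X_star)                                 # rows are moments t_j, columns are variables i
print(X_star.loc[2016, 'energy_per_capita'])  # one single point X(t_j, i) of the vector
```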

 

Right, I got myself lost a bit in that vector thing, and I kind of stepped off the path of wisdom regarding the Gaussian process. In order to understand the logic of the Gaussian process, you’d better revise the Gaussian distribution, or, in other words, the normal distribution. If any set of observable data follows the normal distribution, the values you encounter the most frequently in it are those infinitely close to the arithmetical average of the set. As you probably remember from your maths class at high school, one of the reasons the arithmetical average is so frequently used in all kinds of calculations (even those pretty intuitive ones) is that it doesn’t exist. If you take any set of data and compute its arithmetical average, none of your empirical observations will be exactly equal to that average. Still, and this is really funny, you have things – especially those occurring in large amounts, like foot size in a human population – which most frequently take those numerical values that are relatively the closest to their arithmetical average, i.e. the closest to a value that doesn’t exist, and yet is somehow expected. These things follow the Gaussian (normal) distribution, and we tend to assume that their expected value (i.e. the value we can rightfully expect to meet the most frequently in those things) is their arithmetical average.

Inside the set of all those Gaussian things, there is a smaller subset of things, for which time matters. These phenomena unfold in time. Foot size is a good example. Instead of asking yourself what foot size you are the most likely to satisfy with the shoes you make for the existing population, you can ask about the expected foot size in any human being to be born in the future. What you can do is to measure the average foot size in the population year after year, like over one century. That would be a lot of foot measuring, I agree, but science requires some effort. Anyway, if you measure average foot sizes, year after year during one century, you can discover that those averages follow a normal distribution over time, i.e. the greatest number of annual averages tends to be infinitely close to the general, century-long average. If this is the case, we can say that the average foot size changes over time in a Gaussian process, and this is the first characteristic of this specific process: the mean is always the expected value.

If I apply this elementary assumption to the concept of collective intelligence, it implies a special aspect of intelligence, i.e. generalisation. My eyes transmit to my brain the image of one set of colourful points, and then the image of another set of points, kind of just next to the previous one. My brain connects those dots and labels them ‘woman’, ‘red’, ‘bag’ etc. In a sense, ‘woman’, ‘red’, and ‘bag’ are averages, because they are the incidences I expect to find the most probably in the presence of those precise kinds of colourful points. Thus, collective intelligence endowed with a memory, which works according to a Gaussian process, is the kind of intelligence we use for establishing our basic distinctions. In our collective intelligence, Gaussian processes (if they happen at all), can represent, for example, the formation of cultural constructs such as law, justice, scientific laws, and, by the way, concepts like the Gaussian process itself.

Now, we go one step further, and, in order to do it, we need to go one step back, namely back to the concept of a vector. If my process in time is made of vectors, instead of single points, and each vector is like a snapshot of reality at a given moment, I am interested in something called the covariance of variables inside the vector. If one variable deviates from its own mean, and I raise that deviation to the power of 2 in order to get rid of the possibly embarrassing minus sign, I have variance. If I have two variables, and I take their respective, local deviations from their means, and I multiply those deviations by each other, I have covariance. As we are talking vectors, we have a whole matrix of covariance, between each pair of variables in the vector. Any process unfolding in time and involving many variables has to answer the existential question about its own matrix of covariance. Some processes have the peculiar property of keeping a pretty repetitive matrix of covariance over time. The component, simple variables of those processes change in some sort of constant-pace contredanse. If variable X1 changes by one inch, variable X2 will change by three quarters of a gallon, and so it will keep reproducing for a long time. This is the second basic characteristic of a Gaussian process: future covariance is predictable on the grounds of the covariance observed so far.
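A minimal sketch of that second characteristic: I simulate a two-variable process with a fixed covariance matrix and check that the covariance estimated on the first half of the sample roughly predicts the one estimated on the second half (the parameters are made up).

```python
import numpy as np

rng = np.random.default_rng(7)

# A two-variable Gaussian process with a fixed covariance matrix
true_cov = np.array([[1.0, 0.6],
                     [0.6, 1.5]])
sample = rng.multivariate_normal(mean=[0.0, 0.0], cov=true_cov, size=2000)

first_half, second_half = sample[:1000], sample[1000:]
print("covariance, first half:\n",  np.cov(first_half,  rowvar=False).round(2))
print("covariance, second half:\n", np.cov(second_half, rowvar=False).round(2))
# The two estimates are close to each other (and to the true matrix):
# past covariance is a good predictor of future covariance.
```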

As I am transplanting that concept of very recurrent covariance onto my idea of collective intelligence with memory, Gaussian collective intelligence would be the kind that establishes recurrent functional connections between the things of a society. We call those things institutions. Language, as a matter of fact, is an institution as well. As we have institutions in every society, and societies that do not form institutions tend to have pretty short a life expectancy, we can assume that collective intelligence certainly follows, at least to some extent, the pattern of a Gaussian process.

Back to blogging, trying to define what I remember

My editorial

It is really tough to get back to regular blogging, after a break of many weeks. This is interesting. Since like mid-October, I have been absorbed by teaching and by finishing formal scientific writing connected to my research grant. I have one major failure as for that last one – I haven’t finished my book on technological change and renewable energies. I have like 70% of it and it keeps being like 70%, as if I was blocked on something. Articles flow just smoothly, but I am a bit stuck with the book. Another interesting path for self-investigation. Anyway, teaching and formal writing seem to have kind of absorbed some finite amount of mental energy I have, leaving not much for other forms of expression, blogging included. Now, as I slowly resume both the teaching scheduled for the winter semester, and, temporarily, the writing of formal publications, my brain seems to switch, gently, back into the blogging mode.

When I start a new chapter, it is a good thing to phrase out my takeaways from previous chapters. I think there are two of them. Firstly, it is the concept of an intelligent loop in collective learning. I am mildly obsessed with the phenomenon of collective intelligence, and, if we claim we are intelligent, it would be good to prove we can learn something as a civilisation. Secondly, it is that odd mathematical construct that we mostly know, in economics, as the production function. The longer I work with that logical structure, you know, the ‘Y = K^µ * L^(1-µ) * A’ one (Cobb, Douglas 1928[1]), the more I am persuaded that – together with some fundamental misunderstandings – there is an enormous cognitive potential in it. The production function is a peculiar way of thinking about social structures, where one major factor – the one with the biggest exponent – reshuffles all the cards on the table and actually makes the structure we can observe.
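For the record, here is a minimal sketch of that function with made-up inputs, just to show how the factor with the bigger exponent dominates the outcome:

```python
def cobb_douglas(K, L, A, mu):
    """Y = K^mu * L^(1-mu) * A  (Cobb & Douglas, 1928)"""
    return (K ** mu) * (L ** (1 - mu)) * A

# Made-up inputs: capital K, labour L, scale factor A
print(cobb_douglas(K=1000.0, L=500.0, A=1.2, mu=0.65))
# Doubling the factor with the bigger exponent (here K, with mu = 0.65) moves output
# more than doubling the other one (L, with exponent 0.35):
print(cobb_douglas(K=2000.0, L=500.0, A=1.2, mu=0.65))   # output multiplied by 2^0.65
print(cobb_douglas(K=1000.0, L=1000.0, A=1.2, mu=0.65))  # output multiplied by 2^0.35
```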

The loop of intelligent learning articulates into a few specific, more detailed issues. In order to learn, I have to remember what happened to me. I need to take some kind of break from experiencing reality – the capacity of abstract thinking is of great help in this respect – and I need to connect the dots, form some patterns, test them, hopefully survive the testing, and then come up with something smart, which I can label as my new skills. Collective memory is the first condition of collective learning. There is one particular issue, pertaining to both individual memory and the collective one: what we think is our memory of past occurrences is, in fact, our present interpretation of information we collected in the past. It is bloody hard to draw a line between what we really remember, and what we think we remember. There are scientifically defined cases of mental disturbances (e.g. the Korsakoff psychosis), where the person creates their own memory on a completely free ride, without any predictable connection to what had really happened in the past. If individual people can happen to do things like that, there is absolutely no reason why whole societies shouldn’t. Yet, when it comes to learning, looping inside our own imagination simply doesn’t work: we come up with things like Holy Inquisitions or Worst Enemies, and it is not what is going to drive us into the next millennium. In order to learn truly and usefully, we have to connect the dots from our actual, past experience, as little edited as possible. The question is: how can I find out what the society really remembers? How can I distinguish it from the imaginary bullshit? Economics is very largely about numbers. The general question about memory translates into something more abstract: how can I tell that the numbers I have today somehow remember the numbers from the past? How can I tell that a particular variable remembers its own variance from the past? What about this particular variable remembering the past variance in other variables?

In order to answer those questions, I go back to my understanding of memory as a phenomenon. How do I know I have memory? I am considering a triad of distinct phenomena which can prove I have memory: remembering, repetition, and modification of behaviour. Remembering means that I can retrieve, somehow, from the current resources of my brain, some record relative to past experience. In other words, I can find past information in the present information. My brain needs recurrent procedures for retrieving that past information. There must be some librarian-like algorithm in my brain, which can pick up bits of my past in a predictable manner. Memory in quantitative data means, thus, that I can find numbers from the past in the present ones, and I can find them in a recurrent manner, i.e. through a function. If I have a number from the past, let’s call it x(t-1), and a present number x(t), my x(t) has the memory of x(t-1) if, in a given set X = {x1, x2, …, xn} of values attributed to x, I can find a regularity of the type xi(t) = f[xi(t-1)], and the function f[xi(t-1)] is a true one, i.e. it has some recurrent shape to it. Going from mathematics back to real life, remembering means that every time I contemplate my favourite landscape – an open prairie in the mountains, on a sunny day in summer – I somehow rehearse all the past times I saw the same landscape.

Going back to maths, there are many layers and tunes in remembering. I can remember in a constant time frame. It means that right now, my brain kind of retrieves sensory experience from the past three weeks, the whole of three weeks and just three weeks. That window in time is my constant frame of remembering. Yet, older memories happen to pop up in my head. Sometimes, I go, in my memories, like two years back. On other occasions, something from a moment twenty years ago suddenly visits my consciousness. Besides the constant window of three weeks back in time, my brain uses a flexible temporal filter, when some data from further in the past seems to connect with my present experience. Thus, in the present information xi(t), currently processed in my mind, there are two layers of remembering: the one in the constant window of three weeks, on the one hand, and the one occurring in the shifting regressive reach, on the other. Mathematically, a constant is a constant, for example ‘b’, whilst something changing is basically a stochastic distribution, which I provisionally call ‘STOCH’, i.e. a range of possible states, with each of them occurring with a certain probability. My mathematical formula gets the following shape: xi(t) = f1[xi(t – b)] + f2[xi(t – STOCH)].

As someone looks at these maths, they could ask: ‘Good, but where is the residual component on the right side? Is your current information just made of remembering? Is there nothing squarely present and current?’. Well, this is a good question. How does my intelligence work? Is there anything else being processed, besides the remembered information? I start with defining the present moment. In the case of a brain, one neuron fires in about 2 milliseconds, although there is research showing that each neuron can largely control that speed and make connections faster or slower (Armbruster & Ryan 2011[2]). Two milliseconds are not that long: they are two thousandths of what we commonly perceive as the shortest unit of time in real life. Right, one neuron doesn’t make me clever; I need more of them getting to do something useful together. How many? As I was attending my lessons at the driving school, some 27 years ago, I was taught that the normal reaction time, in a driver, is about 1.5 seconds. This is the time between my first glimpse of a dog crossing the tarmac, and me pushing the brake pedal. It makes 1 500 milliseconds. Divided by two milliseconds for one neuron, thus assuming the individual neurons fire one after another, it gives 750. I have roughly 100 billion neurons in my brain (you the same), and each of them has, on average, 7 000 connections with other neurons. It makes 7E+14 synaptic connections. In a sequence of 750 neurons firing one after the other, I have 750 * 7 000 = 5 250 000 synapses firing. In other words, something called ‘strictly current processing of information’ activates like 7.5E-09 of my brain: not much. It looks as if my present wasn’t that important, in quantitative terms, in relation to my past. Moreover, in those 5 250 000 synapses firing in the on-the-spot reaction of a driver, there is a lot of remembering, like ‘Which of those three things under my feet is the brake pedal?’.
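The back-of-the-envelope arithmetic, spelled out with the same round figures:

```python
neuron_firing_time_ms = 2          # time for one neuron to fire
reaction_time_ms = 1500            # typical reaction time of a driver
neurons_in_sequence = reaction_time_ms // neuron_firing_time_ms    # 750

neurons_in_brain = 100e9           # roughly 100 billion neurons
synapses_per_neuron = 7000
total_synapses = neurons_in_brain * synapses_per_neuron            # 7.0E+14

synapses_firing = neurons_in_sequence * synapses_per_neuron        # 750 * 7 000 = 5 250 000
print(synapses_firing / total_synapses)                            # about 7.5E-09 of the brain
```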

Let’s wrap it up, partially. We are in a social system, and that social system is supposed to have collective intelligence equivalent to the individual intelligence of a human brain. If this is the case, numerical data describing the actions of that social system consists, for any practical purpose, almost exclusively in remembering. There is some residual of what can be considered as strictly spoken current processing of information, but it is really negligible. Thus, I come up with my first test for collective intelligence in a social system. The system in question is intelligent if, in a set of numerical time series describing its behaviour, the present data can be derived from past data, without a significant residual, through a function combining a fixed window of remembering with a stochastic function of reprocessing old information, or xi(t) = f1[xi(t – b)] + f2[xi(t – STOCH)]. If this condition is generally met, the social system remembers enough to learn from past experience. ‘Generally’ means that nuances can be introduced into that general scheme. Firstly, if my function yields a significant residual ‘Res[xi(t)]’, thus if its empirically verified version looks like xi(t) = f1[xi(t – b)] + f2[xi(t – STOCH)] + Res[xi(t)], it just means that my (our?) social system produces some residual information, whose role in collective learning is unclear. It can be the manifestation of super-fast learning on the spot, or, conversely, it can indicate that the social system produces some information bound not to be used for future learning.
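One possible way of operationalizing that test, as a minimal sketch: regress x(t) on the fixed-window lag x(t – b) and on a stochastically picked older lag, and look at the size of the residual. The lags, the linear form of f1 and f2, and the data below are all placeholders, not my actual estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# A made-up time series with some genuine memory in it
n, b = 200, 3
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal(0, 1)

# Explanatory variables: the fixed-window lag x(t-b), and a stochastic older lag x(t-s), s drawn from [4, 12]
t_index = np.arange(12, n)
fixed_lag = x[t_index - b]
stoch_lag = np.array([x[t - rng.integers(4, 13)] for t in t_index])
target = x[t_index]

# Least-squares fit of x(t) = f1[x(t-b)] + f2[x(t-STOCH)] + Res[x(t)], with f1 and f2 taken as linear
X = np.column_stack([fixed_lag, stoch_lag, np.ones_like(fixed_lag)])
coefs, residuals, *_ = np.linalg.lstsq(X, target, rcond=None)
explained = 1 - residuals[0] / np.sum((target - target.mean()) ** 2)
print("coefficients (fixed lag, stochastic lag, intercept):", coefs.round(3))
print("share of variance explained by remembering:", round(explained, 3))
```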

And so I come to the more behavioural proof of memory and learning. When we do something right now, there is a component of recurrent behaviour in it, and another one, that of modified behaviour. We essentially do things we know how to do, i.e. we repeat patterns of behaviour that we have formed in the past. Still, if we are really prone to learn, thus to have an active memory, there is another component in our present behaviour: that of current experimentation, and of modification in behaviour, with a view to future repetition. We repeat the past, and we experiment for the future. The residual component Res[xi(t)] in my function of memory – xi(t) = f1[xi(t – b)] + f2[xi(t – STOCH)] + Res[xi(t)] – if it exists at all, can be attributed to such experimentation. Should it be the case, my Res[xi(t)] should reflect in future functions of memory in my data, and it should reflect in the same basic way as the one already defined. Probably, there is some recurrent cycle of learning, taking place in a more or less constant time window, and, paired with it, a semi-random utilisation of present experience in the future, occurring in a stochastically varying time range. Following the basic logic which I am trying to form here, both of the time ranges in that function of modification in behaviour should be pretty much the same as those in the function of memory already defined.

[1] Charles W. Cobb, Paul H. Douglas, 1928, A Theory of Production, The American Economic Review, Volume 18, Issue 1, Supplement, Papers and Proceedings of the Fortieth Annual Meeting of the American Economic Association (March 1928), pp. 139 – 165

[2] Armbruster, M., Ryan, T., 2011, Synaptic vesicle retrieval time is a cell-wide rather than individual-synapse property, Nature Neuroscience 14, 824–826 (2011), doi:10.1038/nn.2828