Big Black Swans

Oops! Another big break in blogging. Sometimes life happens so fast that thoughts in my head run faster than I can possibly write about them. This is one of those sometimeses. Topics for research and writing abound, projects abound, everything is changing at a pace which proves challenging to a gentleman in his fifties, such as I am. Yes, I am a gentleman: even when I want to murder someone, I go out of my way to stay polite.

I need to do that thing I periodically do on this blog. I need to use published writing as a method of putting order into chaos. I start by sketching the contours of that chaos and its main components, and then I sequence and compartmentalize.

My chaos is made of the following parts:

>> My research on collective intelligence

>> My research on energy systems, with focus on investment in energy storage

>> My research on the civilisational role of cities, and on the concept of the entire human civilisation, as we know it today, being a combination of two productive mechanisms: the production of food in the countryside, and the production of new social roles in the cities.

>> Joint research which I run with a colleague of mine, on the reproduction of human capital

>> The project I once named Energy Ponds, and which I recently renamed ‘Project Aqueduct’, for the purposes of promoting it.

>> The project which I have just started, together with three other researchers, on the role of Territorial Defence Forces during the COVID-19 pandemic

>> An extremely interesting project, which both I and a bunch of psychiatrists from my university have provisionally failed to kickstart, on the analysis of natural language in diagnosing and treating psychoses

>> A concept which recently came to my mind, as I was working on a crowdfunding project: a game as method of behavioural research about complex decisional patterns.

Nice piece of chaos, isn’t it? How do I put order in my chaos? Well, I ask myself, and, of course, I do my best to answer honestly the following questions: What do I want? How will I know I have what I want? How will other people know I have what I want? Why should anyone bother? What is the point? What do I fear? How will I know my fears come true? How will other people know my fears come true? How do I want to sequence my steps? What skills do I need to learn?

I know I tend to be deceitful with myself. As a matter of fact, most of us tend to be. We like confirming our ideas rather than challenging them. I think I can partly overcome that subjectivity of mine by interweaving my answers to those questions with references to published scientific research. Another way of keeping my thinking close to real life consists in trying to understand what specific external events have pushed me to engage in the different paths which, as I walk down all of them at the same time, make up my personal chaos.

In 2018, I started using artificial neural networks, just like that, mostly for fun, and in a very simple form. As I observed those things at work, I developed a deep fascination with intelligent structures, and just as deep (i.e. f**king hard to phrase out intelligibly) an intuition that neural networks can be used as simulators of collectively intelligent social structures.

Both of my parents died in 2019, at exactly the same age of 78, having spent the last 20 years of their respective lives in complete separation from each other, to the point of not exchanging a single spoken or written word. That changed my perspective as regards subjectivity. I became acutely aware how subjective I am in my judgement, and how subjective other people most likely are. The pandemic started in early 2020, and, almost at the same moment, I started to invest in the stock market again, after a few years of break. I had been learning at an accelerated pace. I had been adapting to the reality of high epidemic risk – something I had almost forgotten about since a devastating bout of scarlatina at the age of 9 – and I had been adapting to a subjectively new type of economic decision (i.e. decisions made in the stock market). That had been a hell of a ride.

Right now, another piece of experience comes into the game. Until recently, in my house, the attic was mine. The remaining two floors were my wife’s dominion, but the attic was mine. It was painted in joyful, eye-poking colours. There was a lot of yellow and orange. It was mine. Yet, my wife had an eye on that space. Wives do, quite frequently. A fearsome ally came to support her: an interior decorator. Change has just happened. Now, the attic is all dark brown and cream. To me, it looks like the inside of a coffin. Yes, I know what the inside of a coffin looks like: I saw it just before my father’s funeral. That attic has become an alien space for me. I still have a hard time wrapping my mind around how shaken I am by that change. I realize how attached I am to the space around me. If I am so strongly bound to the colours and shapes in my vicinity, other people probably feel the same, and that triggers another intuition: we, humans, are either simple dwellers in the surrounding space, or we are architects thereof, and these are two radically different frames of mind.

I am territorial as f**k. I have just clarified it inside my head. Now, it is time to go back to science. As a first step, I am trying to connect those experiences of mine to my hypothesis of collective intelligence. Step by step, I am phrasing it out. We are intelligent about each other. We are intelligent about the physical space around us. We are intelligent about our own subjectivity, and thus we have invented that thing called language, which allows us to produce a baseline for an intersubjective description of the surrounding world.

I am conscious of my subjectivity, and of my strong emotions (that attic, f**k!). Therefore, I want to see my own ideas from other people’s point of view. Some review of literature is what I need. I start with Peeters, M. M., van Diggelen, J., Van Den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., & Raaijmakers, S. (2021). Hybrid collective intelligence in a human–AI society. AI & SOCIETY, 36(1), 217-238. https://doi.org/10.1007/s00146-020-01005-y . I start with it because I could lay my hands on an open-access version of the full paper, and, as I read it, many bells ring in my head. Among all those bells ringing, the main one refers to the experience I had with otherwise simplistic neural networks, namely perceptrons that can be structured in an Excel spreadsheet. Back in 2018, I was observing the way a truly dumb neural network was working, one made of just 6 equations looped together, and I had that realization: ‘Blast! Those six equations together are actually intelligent’. This is the core of that paper by Peeters et al. Collective intelligence became a thing in scientific literature when Artificial Intelligence started spreading throughout society, so the literature has followed roughly the same path as I have individually. Conscious, inquisitive interaction with artificial intelligence seems to awaken an entirely new worldview, in which we, humans, can see at work an intelligent fruit of our own intelligence.
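For the curious, here is roughly what I mean by ‘six equations, looped together’, moved out of Excel into a few lines of Python. It is a minimal sketch for illustration only: the input values, the target and the learning rate are made up, not taken from my actual spreadsheets.

```python
import math, random

random.seed(2)

# A deliberately dumb perceptron: six equations, looped together.
# Two standardized input variables, one target - all numbers made up for illustration.
x = [0.3, -1.1]                                    # standardized input variables
target = 0.5                                       # the output we try to hit
w = [random.uniform(-0.1, 0.1) for _ in x]         # initial weights

for round_no in range(100):                        # consecutive experimental rounds
    h = sum(wi * xi for wi, xi in zip(w, x))       # eq. 1: aggregate input
    y = math.tanh(h)                               # eq. 2: neural activation
    error = target - y                             # eq. 3: local error of optimization
    slope = 1 - y ** 2                             # eq. 4: slope of tanh at h
    w[0] += 0.1 * error * slope * x[0]             # eq. 5: update of the first weight
    w[1] += 0.1 * error * slope * x[1]             # eq. 6: update of the second weight

    if round_no in (0, 99):
        print(round_no, abs(error))                # error starts substantial, then shrinks
```

Nothing more than that, and yet, once looped, those six equations learn.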

I am trying to make one more step, from bewilderment to premises and hypotheses. In Peeters et al., three big intellectual streams are named: (1) the technology-centric perspective, (2) the human-centric one, and finally (3) the collective intelligence-centric perspective. The third one sounds familiar, and so I dig into it. The general idea here is that humans can put their individual intelligences into a kind of interaction which is smarter than those individual intelligences on their own. This hypothesis is a little counterintuitive – if we consider electoral campaigns or Instagram – but it becomes much more plausible when we think about networks of inventors and scientists. Peeters et al. present an interesting extension to that, namely collectively intelligent agglomerations of people and technology. This is exactly what I do when I do empirical research and use a neural network as a simulator, with quantitative data in it. I am one human interacting with one simple piece of AI, and interesting things come out of it.

That paper by Peeters et al. cites a book: Sloman, S., & Fernbach, P. (2018). The knowledge illusion: Why we never think alone (Penguin). Before I pass to my first impressions about that book, another side note. In 1993, a namesake of the book’s first author, Aaron Sloman, wrote an introduction to another book, this one being a collection of proceedings (conference papers in plain human lingo) from a conference, grouped under the common title: Prospects for Artificial Intelligence (Hogg & Humphreys 1993[1]). In that introduction, Aaron Sloman claims that using Artificial Intelligence as a simulator of General Intelligence requires a specific approach, which he calls ‘design-based’: we investigate the capabilities of intelligence, understood as a general phenomenon, and the constraints within which it has to function. Based on those constraints, requirements can be defined, and, consequently, so can the architecture and mechanisms through which intelligence is enabled to meet them.

We jump 25 years ahead, from 1993 to 2018, and this is what Sloman & Fernbach wrote in the introduction to “The knowledge illusion…”: “This story illustrates a fundamental paradox of humankind. The human mind is both genius and pathetic, brilliant and idiotic. People are capable of the most remarkable feats, achievements that defy the gods. We went from discovering the atomic nucleus in 1911 to megaton nuclear weapons in just over forty years. We have mastered fire, created democratic institutions, stood on the moon, and developed genetically modified tomatoes. And yet we are equally capable of the most remarkable demonstrations of hubris and foolhardiness. Each of us is error-prone, sometimes irrational, and often ignorant. It is incredible that humans are capable of building thermonuclear bombs. It is equally incredible that humans do in fact build thermonuclear bombs (and blow them up even when they don’t fully understand how they work). It is incredible that we have developed governance systems and economies that provide the comforts of modern life even though most of us have only a vague sense of how those systems work. And yet human society works amazingly well, at least when we’re not irradiating native populations. How is it that people can simultaneously bowl us over with their ingenuity and disappoint us with their ignorance? How have we mastered so much despite how limited our understanding often is?” (Sloman, Steven; Fernbach, Philip. The Knowledge Illusion (p. 3). Penguin Publishing Group. Kindle Edition)

Those readings have given me a thread, and I am interweaving that thread with my own thoughts. Now, I return to another reading, namely to “The Black Swan” by Nassim Nicholas Taleb, where, on pages xxi – xxii of the introduction, the author writes: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not prospective) predictability. A small number of Black Swans explain almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives. Ever since we left the Pleistocene, some ten millennia ago, the effect of these Black Swans has been increasing. It started accelerating during the industrial revolution, as the world started getting more complicated, while ordinary events, the ones we study and discuss and try to predict from reading the newspapers, have become increasingly inconsequential”.

I combine the idea by Sloman & Fernbach, that we, humans, can be really smart collectively through interaction between very limited and subjective individual intelligences, with the concept of the Black Swan by Nassim Nicholas Taleb. I reinterpret the Black Swan. This is an event we did not expect to happen, and yet it happened, and it has blown a hole in our self-sufficient certainty that we understand what the f**k is going on. When we do things, we expect certain outcomes. When we don’t get the outcomes we expected, this is a functional error. This is a local instance of chaos. We take that local chaos as learning material, and we try again, and again, and again, in the expectation of bringing order into the general chaos of existence. Our collectively intelligent learning feeds on precisely those local instances of chaos.

Nassim Nicholas Taleb claims that our culture tends to mask the occurrence of Black Swan-type outliers as just the manifestation of otherwise recurrent, predictable patterns. We collectively internalize Black Swans. This is what a neural network does. It takes an obvious failure – the local error of optimization – and utilizes it as valuable information in the next experimental round. The network internalizes a series of f**k-ups, because it never hits the output variable exactly on target; there is always some discrepancy, at least a tiny one. The fact of being wrong about reality becomes normal. Every neural network I have worked with does the same thing: it starts with some substantial magnitude of error, and then it tends to reduce that error, at least temporarily, i.e. for at least a few more experimental rounds.

This is what a simple neural network – one of those I use in my calculations – does with quantitative variables. It processes data so as to create error, i.e. so as to purposefully create outliers located outside the expected range. Those neural networks purposefully create Black Swans, the abnormalities which allow us to learn. Now, what is so collective about neural networks? Why do I intuitively associate my experiments with neural networks with collective intelligence rather than with individual intelligence? Well, I work with socio-economic quantitative variables. The lowest level of aggregation I use is the probability of occurrence of a specific event, and even that is unusually low aggregation for me. Most of my data is like Gross Domestic Product, average hours worked per person per year, average prices of electricity etc. This is essentially collective data, in the sense that no individual intelligence can possibly be observed as having its own population density or its own inflation rate. There needs to be a society in place for those metrics to exist at all.

When I work with that type of data, I assume that many people observe and gauge it, then report and aggregate their observations etc. Many people put a lot of work into making those quantitative variables both available and reliable. I guess it is important, then. When some kind of data is that important collectively, it is very likely to reflect some important aspect of collective reality. When I run that data through a neural network, the latter yields a simulation of collective action and its (always) provisional outcomes.
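To show what such a run looks like in practice, here is a toy sketch: a handful of made-up observations standing in for aggregate variables like GDP, energy per capita or inflation (none of these numbers come from any real dataset), one tanh neuron trained on them over consecutive experimental rounds, and the residual errors it keeps producing – my little, locally manufactured Black Swans.

```python
import math, random

random.seed(1)

# Made-up, standardized aggregate observations: [GDP, energy per capita, inflation] -> outcome.
# None of these numbers come from any real dataset; they only illustrate the mechanics.
data = [
    ([ 0.8,  1.2, -0.3],  0.6),
    ([-0.5,  0.1,  0.9], -0.2),
    ([ 1.4, -0.7,  0.2],  0.8),
    ([-1.1, -0.9,  1.5], -0.7),
]

w = [random.uniform(-0.1, 0.1) for _ in range(3)]

for _ in range(200):                                    # experimental rounds
    for x, target in data:
        h = sum(wi * xi for wi, xi in zip(w, x))        # aggregate input
        y = math.tanh(h)                                # neural activation
        error = target - y                              # the local failure
        for i in range(3):
            w[i] += 0.05 * error * (1 - y ** 2) * x[i]  # the error is internalized

# The residuals left after training are the observations the network still gets most wrong,
# i.e. the ones with the most learning left in them.
residuals = [abs(t - math.tanh(sum(wi * xi for wi, xi in zip(w, x)))) for x, t in data]
print(residuals)
```

The residuals never go exactly to zero, and that persistent discrepancy is precisely the raw material of the next rounds of learning.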

My neural network (I mean the one on my computer, not the one in my head) takes like 0.05 of local Gross Domestic Product, then 0.4 of average consumption of energy per capita, maybe 0.09 of inflation in consumer prices, plus some other stuff in random proportions, and sums up all those small portions of whatever is important as a collectively measured socio-economic outcome. Usually, that sum is designated as ‘h’, the aggregate input. Then, my network takes that h and puts it into a neural activation which, in most cases, is the hyperbolic tangent, i.e. tanh(h) = (e^(2h) – 1) / (e^(2h) + 1). When we learn by trial and error, the number e^(2h) can be read as the force with which the neural network reacts to a complex stimulation from a set of variables x_i. Since e^(2h) = (e²)^h, the ‘e²’ part of that reaction is constant and equals e² ≈ 7.389056099, whilst h is a variable parameter, specific to the given phenomenal occurrence. The parameter h is roughly proportional to the number of variables in the source empirical set. The more complex the reality I process with the neural network, i.e. the more variables I split my reality into, the greater the value of h. In other words, the more complexity, the more the neuron, through the expression (e²)^h, is driven away from its constant root e². Complexity in variables induces greater swings in the hyperbolic tangent, i.e. greater magnitudes of error, and, consequently, longer strides in the process of learning.
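Just to check the arithmetic on a toy example (the weights 0.05, 0.4 and 0.09 are purely illustrative, and I assume each standardized variable equals 1): the long-hand expression (e^(2h) – 1) / (e^(2h) + 1) gives exactly the same number as the built-in hyperbolic tangent.

```python
import math

# Illustrative weights and inputs - not estimated from any real dataset.
weights = [0.05, 0.4, 0.09]        # e.g. GDP, energy per capita, inflation
inputs  = [1.0, 1.0, 1.0]          # pretend each standardized variable equals 1

h = sum(w * x for w, x in zip(weights, inputs))        # aggregate input: h = 0.54

e2 = math.e ** 2                                       # the constant "root": ~7.389056099
long_hand = (e2 ** h - 1) / (e2 ** h + 1)              # (e^(2h) - 1) / (e^(2h) + 1)
built_in  = math.tanh(h)

print(h, long_hand, built_in)                          # both forms give the same number, ~0.493
```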

Logically, the more complex the social reality I represent with quantitative variables, the bigger the Black Swans the neural network produces as it tries to optimize one single variable chosen as the desired output of the neurally activated input.


[1] Hogg, D., & Humphreys, G. W. (1993). Prospects for Artificial Intelligence: Proceedings of AISB’93, 29 March-2 April 1993, Birmingham, UK (Vol. 17). IOS Press.