The batteries we don’t need anymore

I continue on the thread I started to develop in my last update in French, titled ‘De quoi parler à la prochaine réunion de faculté’ (‘What to talk about at the next faculty meeting’), i.e. I am using this blog, and the act of writing, to put some order into the almost ritual mess that happens at the beginning of the academic year. New calls for tenders open in the ministerial grant programs, new syllabuses need to be prepared, new classes start. Ordinary stuff, mind you; it is just something about September, as if I were in Vivaldi’s ‘Four Seasons’: the hot, tumultuous Summer slowly folds into the rich, textured, and yet implacably realistic Autumn.

My central idea is to use some of the science I dove into during the summer holidays as an intellectual tool for putting order in that chaos. That almost new science of mine is mostly based on the theory of complex systems, and my basic claim is that technological change is an emergent phenomenon in complex social systems. We don’t know exactly why our technologies change the way they do. We can trace current technologies back to their most immediate ancestors, and sometimes we can predict their most immediate successors, but that’s about it. Futuristic visions of the technologies that might exist 50 years from now are already a kind of traditional entertainment. On the other hand, the concept of technological progress, when we try to find a developmental logic in historically known technological change, usually stands on wobbly legs. Yes, electricity allowed the emergence of medical technologies used in hospitals, and that saved a lot of human lives, but there is no way Thomas Edison could have known that. The most spectacular technological achievements of mankind, such as the Egyptian pyramids, the medieval cathedrals, the Dutch windmills of the 16th century, or the automobile, look ambiguous from a historical distance. Yes, they all solved some problems, but they also facilitated the emergence of new ones. The one truly unequivocal benefit of those technological leaps, which could actually be experienced by the people who made them, was learning how to develop technologies.

The studies I did during the Summer holidays of 2021 focused on four essential mathematical models of emergent technological change: cellular automata, the flock of birds AKA particle swarm, the ants’ nest, and imperfect Markov chains. I start by passing in review the model of cellular automata. At any given moment, social complexity can be divided into a finite number of social entities (agents). They can be individual humans, businesses, NGOs, governments, local markets etc. Each such entity has an immediate freedom of movement, i.e. a finite number of one-step moves. The concept is related to the theory of games and corresponds to what happens in real life. When we do something social, we seldom just rush forwards. Most frequently, we make one step, observe the outcomes, adjust, then make the next step etc. When all social agents do it, the whole social complexity can be seen as a collection of cells, or pixels. Each such cell (pixel) is a local state of being in society. A social entity can move into an available state, or not, at their pleasure and leisure. All the one-step moves a social entity can make translate into a trajectory it can follow across the social space. The collective outcomes we strive for and achieve can be studied as temporary complex states of those entities following their respective trajectories. The epistemological trick here is that individual moves and their combinations can be known for sure only ex post. All we can do ex ante is define the possible states, and then wait and see where reality goes.
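For readers who like to see such things in code, the cellular-automaton view above can be sketched very simply. This is just a minimal illustration of my own, in Python; the grid size, the four-neighbour moves and the random walk rule are all illustrative assumptions of mine, not a canonical model.

```python
import random

GRID = 10  # the social space: a GRID x GRID set of possible local states

def one_step_moves(cell):
    """All one-step moves available from a given cell (its four neighbours)."""
    x, y = cell
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in candidates if 0 <= a < GRID and 0 <= b < GRID]

def trajectory(start, steps, rng):
    """Ex post, an agent's path is just the sequence of one-step moves it made."""
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(one_step_moves(path[-1])))
    return path

rng = random.Random(42)
agents = [trajectory((rng.randrange(GRID), rng.randrange(GRID)), 20, rng)
          for _ in range(5)]

# Ex ante we only knew the possible states; ex post we can read the paths.
for i, path in enumerate(agents):
    print(f"agent {i}: start {path[0]}, end {path[-1]}")
```

The point of the sketch is the epistemological trick from the paragraph above: the set of possible states is fully known in advance, yet each trajectory is knowable only after it has been walked.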

As we are talking about the possible states of social complexity, I found an interesting mathematical mindf**k in quite an unexpected source, namely the book titled ‘Aware. The Science and Practice of Presence. The Groundbreaking Meditation Practice’ by Daniel J. Siegel [Penguin Random House LLC, 2018, Identifiers: LCCN 2018016987 (print), LCCN 2018027672 (ebook), ISBN 9780143111788, ISBN 9781101993040 (hardback)]. It is a mathematical way of thinking, apparently taken from quantum physics. Here is the essence of it. Everything that happens does so as the 100% probability of that thing happening. Each phenomenon which takes place is the actualization of the same phenomenon having been just likely to happen.

The actualization of probability can be seen as the collision of two vehicles in traffic. When the two vehicles are at a substantial distance from each other, the likelihood of their colliding is zero, for all practical purposes. As they converge towards each other, there comes a point when they become sort of provisionally entangled, e.g. they find themselves heading towards the same crossroads. The probability of collision increases slightly, and yet it is not even the probability of collision; it is just the probability that these two might find themselves in a vicinity conducive to a possible collision. Nothing to write home about yet, like, really. It can be seen as a plateau of probability slowly emerging out of the initial soup of all the things which can possibly happen.

As the two cars drive closer and closer to the crossroads in question, the panoply of possible states narrows down. There is a very clear chunk of reality which gains in likelihood, as if it was a mountain range pushing up from the provisional plateau. There comes a point where the two cars (and their drivers) just come on collision course, and there is no way around it, and this is a peak of 100% probability. Boom! Probability is being consumed.
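That plateau-then-peak shape can be sketched numerically. The mapping below is my own toy assumption (the two radii and the linear ramp between them are arbitrary), meant only to illustrate how likelihood climbs from the amorphous soup, through a plateau, up to the 100% peak.

```python
def collision_probability(distance, plateau_radius=50.0, collision_radius=5.0):
    """Toy mapping from the distance between two cars to the likelihood of collision."""
    if distance >= plateau_radius:
        return 0.0  # the amorphous soup: collision practically impossible
    if distance <= collision_radius:
        return 1.0  # collision course: the 100% peak; boom
    # in between: a plateau of probability sloping up into the peak
    return (plateau_radius - distance) / (plateau_radius - collision_radius)

for d in [100, 50, 30, 10, 5, 0]:
    print(f"distance {d:>3} -> probability {collision_probability(d):.2f}")
```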

What do those cars have in common with meditation, and with the emergence of technological change? As regards meditation, a thought can be viewed as the progressively emerging actualization of something that was just a weak probability: a month ago it was just weakly probable that today I would think what I think; it became much more likely yesterday, as the thoughts of yesterday have an impact on the thoughts of today; and today it all comes to fruition, i.e. to the 100% probability. As regards emergent technological change, the way technology changes today can be viewed as the actualization of something that was highly probable last year, just somehow probable 10 years ago, and had been just part of the amorphous soup of probability 30 years ago. The trajectories followed by individual agents inside social complexity, as defined in the theory of cellular automata, are entangled together precisely according to that pattern of emergent probabilities. Two businesses coming up with two mutually independent, and yet similar technologies are like two peak actualizations of 100% probability on a plateau of probable technological change, which, in turn, has been slowly emerging for some time.

The other theories I use explain, and allow to model mathematically, that entanglement. The theory of particle swarm, pertinent to flocks of birds, assumes that autonomous social agents strive for a certain level of behavioural coupling. We expect some level of predictability from others, and we can cooperate with others when we are satisfactorily predictable in our actions. The striving for social coherence is, therefore, one mechanism of entanglement between the individual trajectories of cellular automata. The theory of the ants’ nest focuses on a specific category of communication systems in societies, working like pheromones. Ants organize by marking, reinforcing and following paths across their environment, and their pheromones serve as markers and reinforcement agents for those paths. In human societies, there are social pheromones. Money and financial markets are probably the most obvious example, but scientific publications are another one. The more scientific articles are published on a given topic, the more likely other articles are to be written on the same topic, until the whole thing reaches a point of saturation, when some ants (pardon me, scientists) start thinking about another path to mark with intellectual pheromones.
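That pheromone-like reinforcement of publication paths can be sketched as a toy simulation. Everything here (the number of topics, the saturation threshold, the proportional-attractiveness rule) is my own illustrative assumption: each new paper picks a topic with probability proportional to the papers already there, until a saturated topic loses its pull.

```python
import random

def simulate_topics(n_papers=500, n_topics=5, saturation=200, seed=1):
    """Each new paper picks a topic in proportion to the papers already on it."""
    rng = random.Random(seed)
    counts = [1] * n_topics  # every topic starts with one seed paper
    for _ in range(n_papers):
        # pheromone rule: attractiveness proportional to existing papers,
        # except that a saturated path loses its pull
        weights = [c if c < saturation else 1 for c in counts]
        topic = rng.choices(range(n_topics), weights=weights)[0]
        counts[topic] += 1
    return counts

print(simulate_topics())
```

Run it and you typically see one or two heavily marked paths, with the crowd moving on once a path saturates, which is exactly the ants’-nest intuition from the paragraph above.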

Cool. I have (OK, we have) complex social states, made of entangled probabilities that something specific happens, and they encompass technology. Those complex states change, i.e. one complex state morphs into another. Now, how the hell can I know, as a researcher, what exactly is happening? As the theory of complex systems suggests, I can never know exactly, for one, and I need to observe, for two. As I don’t know exactly what that thing I label ‘technological change’ is, it is problematic to set too many normative assumptions as to which specific path that technological change should take. I think this is the biggest point of contention when I apply my theory, as I have just outlined it, to my main field of empirical research, namely energy economics and technological change in the energy sector. The more I do that research, the more convinced I am that the so-called ‘energy policies’, ‘climate policies’ etc. are politically driven bullshit based on wishful thinking, with not much of a chance to bring the positive change we expect. I have the deep feeling that setting a strategy for future innovations in our business/country/world is very much like the Polish expression about ‘sharing the skin of a bear which is still running in the woods’. First you need to kill the bear; only then can you bicker about who takes which part of the skin. In the case of innovation, long-term strategies consist in predicting what we will do once we have something which we don’t yet even know exactly what it is.

I am trying to apply this general theory in the grant applications which I am in charge of preparing now, and in my teaching. We have that idea, at the faculty, to apply for funding to study the market of electric vehicles in Europe and in Poland. This is an interesting situation as regards business models. In the US, the market of electric cars is clearly divided among three categories of players. There is Tesla, which is a category and an industry in itself, with its peculiar strategy of extreme vertical integration. Then there are the big, classical car makers, such as Toyota, General Motors etc., with their business models based on a rather short vertical chain of value added inside the business, and a massive supply chain upstream of the house. Finally, there is a rising tide of small start-ups in the making of electric vehicles. I wonder what it could be in Europe. As our European market of electric vehicles is taking off, it is dominated by the incumbent big manufacturers, the old-school ones, with Tesla building a factory in Germany and progressively building a beachhead in the market. There is some timid movement towards small start-up businesses in the field, but it is really timid. In my home country, Poland, the most significant attempt at starting up an electric vehicle made in Poland is a big consortium of state-controlled companies, running under the name of ‘Electromobility Poland’.

I have that intuition, which I provisionally express as a working hypothesis, namely that business models are an emergent property of technologies which they use. As regards the market of electric vehicles, it means that Tesla’s business model is not an accidental explosion of Elon Musk’s genius mind: it is an emergent characteristic of the technologies involved.

Good. I have some theory taking shape, nice and easy. I let it ripen a bit, and I start sniffing around for facts. What is a business model, in my mind? It is the way of operating the chain of value added, and getting paid for it, in the first place. Then, it is the way of using capital. I have noticed that highly innovative environments force businesses to build up and keep large amounts of cash, arguably to manage the diverse uncertainties emerging as the technologies around them morph like hell. In some cases, e.g. in biotech, the right business model for rapid innovation is a money-sucker, with apparently endless pay-ins of additional equity by the shareholders, and yet with a big value in terms of technological novelty created. I can associate that phenomenon of vacuum-cleaning equity with the case of Tesla, which just recently started being profitable, and had gone through something like a decade of permanent operational loss. That is all pertinent to fixed costs, thus to the cash we need to build up and keep in place the organizational structure required for managing the value chain the way we want to manage it.

I am translating those loose remarks of mine into observable phenomena. Everything I have just mentioned is to be found in the annual financial reports. This is my first source of information. When I want to study business models in the market of electric vehicles, I need to look into financial and corporate reports of businesses active in the market. I need to look into the financial reports of Mercedes Benz, BMW, Renault, PSA, Volkswagen, Fiat, Volvo, and Opel – thus the European automotive makers – and see how it is going, and whether whatever is going on can be correlated with changes in the European market of electric vehicles. Then, it is useful to look into the financial reports of global players present in the European market, e.g. Tesla, Toyota, Honda and whatnot, just to see what changes in them as the European market of electric vehicles is changing.

If my intuition is correct, i.e. if business models are truly an emergent property of technologies used, the fact of engaging into the business of electric vehicles should be correlated with some sort of recurrent pattern in those companies.         

Good. This is about the big boys in the playground. Now, I turn toward the small ones, the start-up businesses. As I already said, it is not like we have a crowd of them in the European industry of electric vehicles. The intuitive axis of research which comes to my mind is to look at start-ups active in the U.S., study their business models, and see if there is any chance of something similar emerging in Europe. Somewhat tangentially to that, I think it would be interesting to check whether the Polish government’s plan regarding ‘Electromobility Poland’, that is the plan to develop it with public and semi-public money and then sell it to private investors, has any grounds, and under what conditions it can be a workable plan.

Good. I have rummaged a bit in my own mind; time to do the same to other people. I mean, I am passing on to reviewing the literature. I type ‘electric vehicles Europe business model’ into the platform, and I look at what’s popping up. Here comes the paper by Pardo-Bosch, F., Pujadas, P., Morton, C., & Cervera, C. (2021). Sustainable deployment of an electric vehicle public charging infrastructure network from a city business model perspective. Sustainable Cities and Society, 71, 102957. The abstract says: ‘The unprecedented growth of global cities together with increased population mobility and a heightened concern regarding climate change and energy independence have increased interest in electric vehicles (EVs) as one means to address these challenges. The development of a public charging infrastructure network is a key element for promoting EVs, and with them reducing greenhouse gas emissions attributable to the operation of conventional cars and improving the local environment through reductions in air pollution. This paper discusses the effectiveness, efficiency, and feasibility of city strategic plans for establishing a public charging infrastructure network to encourage the uptake and use of EVs. A holistic analysis based on the Value Creation Ecosystem (VCE) and the City Model Canvas (CMC) is used to visualise how such plans may offer public value with a long-term and sustainable approach. The charging infrastructure network implementation strategy of two major European cities, Nantes (France) and Hamburg (Germany), are analysed and the results indicate the need to involve a wide range of public and private stakeholders in the metropolitan areas. Additionally, relevant, and fundamental patterns and recommendations are provided, which may help other public managers effectively implement this service and scale-up its use and business model.’

Well, I see there is a lot of work to do, as I read that abstract. I rarely find a paper where I have so much to argue with just after having read the abstract. First of all, the ‘unprecedented growth of global cities’ thing. Actually, if you care to have a look at the World Bank data on urban land, as well as that on urban population, you will see that urbanization is an ambiguous phenomenon, strongly region-specific. The central thing is that cities become increasingly distinct from the countryside, as types of human settlements. The connection between electric vehicles and cities is partly clear, but just partly. Cities are the most obvious place to start with EVs, because of the relatively short distances to travel between charging points. Still, moving EVs outside the cities, and making them functional in rural areas, is the next big challenge.

Then comes the ‘The development of a public charging infrastructure network is a key element for promoting EVs’ part. As I have studied the thing in Europe, the network of charging stations, compared to the fleet of EVs in the streets, is so dense that we have something like 12 vehicles per charging station on average, across the European Union. There is no way a private investor can recoup their money when financing a private charging station at that average density. We face a paradox: there are so many publicly funded charging stations, in relation to the car fleet out there, that private investment gets discouraged. I agree that it could be an acceptable transitory state in the market, although it begs the question whether private charging stations are a viable business in Europe. Tesla has based a large part of its business model in the US precisely on the development of its own charging stations. Is that a viable solution in Europe?

Here comes another general remark, contingent on my hypothesis of business models being emergent from technologies. Automotive technology in general, i.e. the technology of a vehicle moving by itself, regardless of the method of propulsion (internal combustion vs electric), is a combination of two component technologies. Said method of propulsion is one of them; the other is the technology of distributing the power source across space. Electric vehicles can be viewed as cousins of tramways and electric trains, with just a more pronounced taste for independence: instead of drinking electricity from permanent wiring, EVs carry their electricity around with them, in batteries.

As we talk about batteries, here comes another paper in my cursory rummaging across other people’s science: Albertsen, L., Richter, J. L., Peck, P., Dalhammar, C., & Plepys, A. (2021). Circular business models for electric vehicle lithium-ion batteries: An analysis of current practices of vehicle manufacturers and policies in the EU. Resources, Conservation and Recycling, 172, 105658. Yes, indeed, the advent of electric vehicles creates a problem to solve, namely what to do with all those batteries. I mean two categories of batteries: those which we need, and hope to acquire easily when the time comes to change them in our vehicles, in the first place, and those we don’t need anymore, which we expect someone to take care of swiftly and elegantly.

Correlated coupling between living in cities and developing science


I continue working on the hypothesis that the technological change which has been going on in our civilisation at least since 1960 is oriented towards increasing urbanization of humanity, and more specifically towards an effective, rigid partition between urban areas and rural ones. I have been meditating on the main threads which I opened up in my previous update entitled ‘City slickers, or the illusion of standardized social roles’. One conclusion comes to my mind, as both an ethical and a praxeological precept: we, humans, should really individuate the s**t out of our social roles. Both from the point of view of individual existence, and from that of benefiting the society we live in, it is of utmost importance to develop unique skillsets in ourselves. Each of us is a distinct experiment in the broad category of ‘human beings’. The more personal development each of us achieves in one’s own individual existence, the further we can advance that local experiment of ours. Standardizing ourselves serves just the purpose of coordination with others, and that of short-term hierarchical advancement. The marginal gains of standardizing our own behaviour tend rapidly towards zero once we are past the point of efficient coordination.

I think I will be weaving that thought into a lot of my writing. It is one of those cases when science just nails down something already made as a philosophical claim. What science? This time, I will be developing on the science known as ‘swarm theory’, and I will try to build a bridge between that theory and my meditations on human individuation. The swarm theory – which you can study by yourself by reading, e.g. Xie, Zhang & Yang 2002[1]; Poli, Kennedy & Blackwell 2007[2]; Torres 2012[3]; and Stradner et al. 2013[4] – takes empirical observations of swarm animals, such as bees, wasps, and ants, and applies those observations to the programming of robots and neural networks, as well as to studying cooperation in human societies. It is one of those eclectic approaches, hard to squeeze into any particular drawer in the huge cabinet of science, and this is precisely why I appreciate it so much.

The basic observation of the swarm theory is that collective coordination is based on functional coupling of individual actions. Coupling means that an action of social entity A provokes action in social entity B, and it can provoke action in three distinct patterns: random, correlated, and coordinated (AKA fixed). Random coupling happens when my action (I am social entity A) makes someone else do something, but at the moment of performing my action I haven’t the faintest idea how that other person will react. I just know that they are bound to react somehow. Example: I walk into a bar and I start asking complete strangers whether they are happy with their lives. Their range of reactions can stretch from politely answering that they are truly happy, thank you so much for asking, through telling me to f**k off, all the way up to punching my face.

When I can reasonably predict the type of other people’s reaction to my action, yet I cannot predict the magnitude of that reaction with 100% accuracy, it is correlated coupling. Let’s suppose I assign homework to my students. I can reasonably predict their reaction, on a scale. Some will not do their homework (value 0 on the scale), and those who do it will stretch in their accomplishment from just passable to outstanding. I intend to focus a lot on correlated coupling in the context of collective intelligence, and I will return to that concept. Now, I want to explain the difference between correlated coupling and the third type, the coordinated AKA fixed coupling. The latter means that a given type of behaviour in one social entity always provokes exactly the same kind of reaction in another social entity. Ballroom dancing, I mean the really trained kind, comes as a perfect example here. A specific step by dancer A always provokes the same step by dancer B.
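The three types of coupling can be sketched side by side. The functions below are my own toy illustrations of the bar example, the homework example, and the ballroom example; none of it is meant as a formal model.

```python
import random

rng = random.Random(0)

def random_coupling(question):
    """Random coupling: I know a reaction will come, but not which one."""
    return rng.choice(["truly happy, thanks", "f**k off", "a punch in the face"])

def correlated_coupling(assignment_scale):
    """Correlated coupling: the type of reaction is predictable, its magnitude is not."""
    effort = rng.random()  # 0 = homework not done, 1 = outstanding
    return round(effort * assignment_scale, 2)

def fixed_coupling(step):
    """Fixed coupling: the same action always triggers exactly the same reaction."""
    choreography = {"step_left": "step_right", "turn": "counter_turn"}
    return choreography[step]

print(random_coupling("are you happy with your life?"))
print(correlated_coupling(10))
print(fixed_coupling("step_left"))
```

Notice where the randomness sits: in the random case it is in the type of reaction, in the correlated case only in its magnitude, and in the fixed case it is gone entirely.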

In my update entitled ‘A civilisation of droplets’, I started to outline my theory about the role of correlated coupling in the phenomenon of collective intelligence. Returning to that example of the homework which I assign to my students: the assignment I make is a piece of information, and I communicate that piece of information to a group of recipients. Even before the advent of digital technologies, the assignment of homework at school had a standardized form. I remember my own school days (long ago): the teacher would open up with something like ‘Attention! This is going to be your homework…’, and then would state the substance of the task(s) to perform, or write it on the blackboard. It was a standardized communication, which provoked, in us students, non-standardized and yet scalable and predictable reactions. That assignment worked like a portion of some hormone, dropped among potentially receptive entities (students).

The development of social roles in cities works very much through correlated coupling of behaviour. Let’s take the example of the urban job market. People migrate to cities largely because of the career opportunities offered there. If the urban job market worked in random coupling, a job offer communicated to job seekers would have unknown consequences. I run a construction business, I look for staff, I communicate the corresponding job offers around, and I receive job applications from nurses, actors, and professional cooks, but not a single person with credentialed skills in construction. This is a case of random coupling between the substance of the jobs I am trying to staff and the qualifications of applicants. Let’s suppose I figured out how to train a cook into a construction worker (See? You just make a recipe for that ceiling, just as if you were preparing a sauce, and then you just apply the recipe: the right structure, the right temperature etc. Simple, isn’t it?), and I sign a work contract with that person, and then they call me on their first scheduled day of work just to say they have changed their mind and will not turn up. This is a case of random coupling between contracts and behaviour.

If the same job market worked in fixed coupling, it would be central planning, which I know perfectly well from the times of my childhood and teenage years in communist Poland. It worked as a system of compulsory job assignments, and the peculiar thing about that system was that it barely worked at all. It was tons of fun, in communist Poland. People would produce all kinds of fake medical papers in order not to be assigned industrial jobs in factories. The government figured out a system of incentives for those workers: high salaries, additional rations of meat per month etc. The result? A wonderfully blossoming black market of fake job certificates, which would officially certify that the given person was a factory worker (= money, meat), whilst the same person ran a small private business on the side.

It is interesting to study those three possible types of behavioural coupling – the random, the correlated and the fixed – in the field of law and contracts. Let’s suppose that me and you, my readers, we sign a business partnership contract. In the random coupling version, the contract would cover just some most essential provisions, e.g. duration and scope, and would give maximum freedom in any other respect. It is like ‘We do business together and we figure the thing out as events unfold’. I used to do business in this manner, back in the day, and it falls under every possible proverb about easy ways: the easy way is a way to nowhere, sort of. A good contract needs reasons for being signed, i.e. it needs to bring real order in an otherwise chaotic situation. If the contract just says: ‘We do whatever comes to our mind’, it is not really order, it is still chaos, with just a label on it.

Fixed coupling corresponds to contracts which regulate in great detail every footstep the parties might be willing to make. If you have some business experience, you probably know the kind: a 50-page framework agreement with 50 pages of annexes, which just gives grounds for signing a case-specific agreement of 50 more pages etc. It is frequently practiced, yet good lawyers know there is a subtle razor edge in that game: if you go too specific, the contract can literally jam. You can get into a situation where terminating the agreement or going to court are the only solutions logical on the grounds of the contractual wording, and yet completely illogical businesswise. A good contract gives some play to the parties, so that they can adapt to the surprising and the unusual. Such flexibility can be created, e.g. through a system of contractual score points. If you buy goods from me for at least $100 000 a month, I give you a 2% rebate, and if you go beyond $500 000 a month, I give you a rebate of 5% etc.
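That system of contractual score points is easy to express as a small function. The thresholds come from my example above; any further tiers would be assumptions of mine.

```python
def rebate_rate(monthly_purchases):
    """Tiered rebate from the example: 2% from $100 000 a month, 5% from $500 000."""
    if monthly_purchases >= 500_000:
        return 0.05
    if monthly_purchases >= 100_000:
        return 0.02
    return 0.0

for amount in [50_000, 100_000, 500_000]:
    print(f"${amount:,} a month -> rebate {rebate_rate(amount):.0%}")
```

The tiers are what gives the contract its play: the coupling between purchases and rebate is correlated, not fixed, because the buyer freely chooses where on the scale to land.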

If we think about life in cities, it is all about social interaction. This is the whole point of living in a city: being in interaction with other people. That interaction is intense and is based on correlated coupling. People tend to flock to those cities, which offer that special kind of predictable freedom. Life in those cities is neither random, nor fixed in its ways. I start a business, and by observing other similar businesses I can nail down a business model that gives me reasonable confidence I will make reasonable money. I assume that when I drop into the social organism a relatively standardized droplet of information (advertising, sign over the door etc.), people will react.

Urban life allows figuring out a whole system of correlated coupling in human behaviour. I walk down the street and I can literally see it. Shop signs, traffic lights, cycling lanes, billboards, mobile apps pegged on digital beacons, certificates working as keys that open career doors: all that stuff is correlated coupling. Now, I want to connect the next dot: the formation of new social roles. As I hinted in ‘City slickers, or the illusion of standardized social roles’, I deeply, intuitively believe that social roles are much more individual, idiosyncratic ways of behaviour than general categories. I think that I am much more the specific blogger than a general blogger.

Here I step in with another intuition of mine: technological change, such as we have been experiencing it over the period of time I can reasonably study, coincides with the multiplication of social roles. I am following two lines of logic. Firstly, the big generic technologies we have been developing allow growing individuation of social roles. Digital technologies are probably the most marked example thereof. Online content plays the role of semantic droplets, provoking more or less correlated coupling of behaviour in people located all over the planet. Personal electronics (smartphones, tablets, laptop computers) start working, for many of us, as some sort of super-cortex. Yes, that super-cortex can be slightly psychotic (e.g. comments on social media), and yes, it gives additional flexibility of behaviour (e.g. I can learn new knowledge faster than before).

I frequently use patent applications as a phenomenological manifestation of technological change. When I want to have my invention legally protected, I apply for a patent. Before the patent is granted, I need to file a patent application with the proper patent office. Then I wait for said office to assess whether my invention is unique enough, and my patent application early enough, to conclude that I truly own a truly novel solution to an important problem. Patent applications are published, in case someone has two words to say about me having tapped into their earlier invention(s). At the aggregate level, the number of patent applications filed during a given window in time is informative about the number of workable and marketable inventions.

The World Bank, my favourite source of data (i.e. of numbers which allow me to feel confident that I have accurate knowledge about reality), provides two aggregates of patent applications: those filed by residents, and those filed by non-residents. A non-resident patent application is one filed by an entity located outside the jurisdiction of the corresponding patent office. I use these two aggregates, calculated for the world as a whole, as numerators, which I divide by the headcount of population, thus calculating coefficients of patent applications per 1 million people.
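The arithmetic of those coefficients is trivial, but for clarity here it is as a sketch. The population and application counts below are made-up round numbers for illustration, not the World Bank figures.

```python
def per_million(applications, population):
    """Patent applications per 1 million people."""
    return applications / (population / 1_000_000)

# made-up round numbers, for illustration only
world_population = 7_800_000_000
resident_apps = 2_200_000
non_resident_apps = 1_100_000

print(f"resident: {per_million(resident_apps, world_population):.1f} per 1M people")
print(f"non-resident: {per_million(non_resident_apps, world_population):.1f} per 1M people")
```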

Of course, you could legitimately ask how the hell it is possible to have non-resident patent applications at the scale of the whole planet. Aliens? Not at all. When a Polish company applies for a patent to be granted in the territory of the United States, by the United States Patent and Trademark Office, it is a non-resident patent application, and it remains non-resident when summed up together with patent applications filed by the same Polish company with the European Patent Office. The global count of non-resident patent applications covers all the cases when an applicant from country A files for a patent with the patent office of country B.

Good. I calculate the coefficients of patent applications per 1 million people on Earth, split into resident applications and non-resident ones. The resulting trends in those two coefficients, calculated at the global scale, are shown in the graph below, entitled ‘Trends in the intensity of patentable invention’. Data regarding patent applications is available since 1985, thus its temporal window is shorter than that of the urbanization-pertinent aggregates (available since 1960). The general reading of the graph is that of an ascending trend. Our average million of people files for patenting more and more inventions. Their average million does the same.

However, as two separate trends are observed in, respectively, resident and non-resident patent applications per 1 million people, their ascent diverges in level and gradient. There have been systematically fewer non-resident patent applications than resident ones. It indicates that the early marketing of science developed into tangible solutions takes place mostly in the domestic markets of the science in question. As for the gradients of change in both trends, they had been more or less similar until 2009 – 2010, and since then resident patent applications have taken off much more steeply. Domestic early marketing of developed science started to grow much more in intensity than the internationally played one. Interesting. Anyway, both trends are ascending in the presence of ascending urbanization and even more sharply growing density of urban population (see ‘City slickers, or the illusion of standardized social roles’).

I want to test that apparent concurrence between urbanization and patenting, and, by the same occasion, I want to do the kind of intellectual stunt I love, which consists in connecting human behaviour directly to mathematics. I intend to use the logic of mathematical correlation, and more specifically of the so-called Pearson correlation, to explain and explore the correlated coupling between growing urbanization and the growing intensity of patenting in the global human civilization.

The first parachute I strap myself to, as I am preparing for that stunt, is the assumption that both urbanization and patenting are aggregate measures of human behaviour. The choice of living in a city is a pattern of behaviour, and spending 5 years in my garage, building that spaceship for going to Mars, and subsequently filing for a patent, is also a pattern of behaviour. My second parachute is statistical: I assume that change in behaviour is observable as deviation from an expected, average behaviour. I take a third parachute, by assuming that behaviours which I want to observe are numerically measurable as magnitudes on their respective scales. At this point, we enter the subtly foggy zone somewhere between individual behaviour and the collective one. The coefficient of urbanization, calculated as the percentage of humanity living in cities, is actually a measure of collective behaviour, indirectly informative about individual decisions. The same is true for all the other quantitative variables we commonly use in social sciences, including variables used in the present study, i.e. density of population in cities, and intensity of patenting. This is an important assumption of the swarm theory, when applied to social sciences: values observed in aggregate socio-economic variables represent cumulative outcomes of past human decisions.     

Correlated coupling means predictable change in the behaviour of social entity B, as social entity A does something. Mathematically, correlated coupling can be observed as concurrent deviations in behaviours A and B from their respective, average expected states. Good. Assumptions are made. Let’s dance on the edge between mathematics and human behaviour.

(Local incidence of behaviour A – Average expected behaviour A) * (Local incidence of behaviour B – Average expected behaviour B) = Local covariance of behaviour A and behaviour B

That local covariance is the way two behaviours coincide. In my next step, I generalize:

Sum (Local covariances of behaviour A and behaviour B) / Number of cases (Local coincidence of behaviour A and behaviour B) = General covariance of behaviour A and behaviour B
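The steps above can be sketched in a few lines of Python; the two series are invented, purely to illustrate the arithmetic:

```python
# Invented local incidences of behaviours A and B, case by case.
a = [10.0, 12.0, 11.0, 15.0]
b = [5.0, 6.0, 5.5, 8.0]

mean_a = sum(a) / len(a)  # average expected behaviour A
mean_b = sum(b) / len(b)  # average expected behaviour B

# Local covariance: the product of the two local deviations in each case.
local_cov = [(ai - mean_a) * (bi - mean_b) for ai, bi in zip(a, b)]

# General covariance: sum of local covariances divided by the number of cases.
general_cov = sum(local_cov) / len(local_cov)
print(general_cov)  # → 2.125
```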

Covariance of two behaviours is meaningful to the extent that it is compared with the individual, endogenous variance of each behaviour taken separately. For that reason, it is useful to denominate covariance with the arithmetical product of standard deviations observable in each of the two behaviours in question. Mathematically, it goes like:

(Local incidence of behaviour A – Average expected behaviour A)^2, i.e. the square power of the local deviation = Local absolute variance in behaviour A

Elevating to square power serves to get rid of the minus sign, should the local incidence of behaviour A be smaller in magnitude than the average expected behaviour A. Once again, I generalize:

Sum (Local absolute variances in behaviour A) / Number of cases (Local incidence of behaviour A) = General variance in behaviour A

Variance is the square power of something that really happened. Square powers are interesting, yet I want to get back to what really happened, and so I take the square root of variance:

(General variance in behaviour A)^0,5, i.e. the square root of the general variance = Standard deviation in behaviour A
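With the same invented series as before, the general variance and the standard deviation come out as follows:

```python
# Invented local incidences of behaviour A.
a = [10.0, 12.0, 11.0, 15.0]
mean_a = sum(a) / len(a)

# Local absolute variance: the squared local deviation.
local_var = [(ai - mean_a) ** 2 for ai in a]

# General variance: sum of local variances divided by the number of cases.
general_var = sum(local_var) / len(local_var)

# Standard deviation: the square root (power 0,5) of general variance.
std_a = general_var ** 0.5

print(general_var)      # → 3.5
print(round(std_a, 4))  # → 1.8708
```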

Covariance (Behaviour A <> Behaviour B) / (Standard deviation in behaviour A * Standard deviation in behaviour B) = Pearson correlation between behaviour A and behaviour B, commonly symbolised as ‘r’

It is mathematically impossible for the absolute value of r to go above 1. What we can reasonably expect from r is that it falls somewhere inside -1 ≤ r ≤ 1. In statistics, it is assumed that -0,3 < r < 0,3 is nothing to write home about, really. It is not a significant correlation. What we are interested in are the outliers of r, namely: -1 ≤ r ≤ -0,3 (significant negative correlation, behaviours A and B change in counter-step, in opposition to each other) and 0,3 ≤ r ≤ 1 (significant positive correlation, behaviours A and B fall nicely in step with each other).

Let’s have a look at the Pearson correlation between the metrics of urbanization, discussed in ‘City slickers, or the illusion of standardized social roles’, and the coefficients of patent applications per 1 million people. Coefficients of correlation are shown in the table below, and they are really high. These are very strong, positive correlations. Covariances of patenting and urbanization explain almost the entirety of their combined standard deviations. This is a very strong case of correlated coupling between behaviours that make cities, on the one hand, and behaviours that make iPhones and electric cars, on the other hand.

Table – Pearson correlations between the metrics of urbanization and the coefficients of patent applications per 1 million people

|  | Non-resident patent applications per 1 million people | Resident patent applications per 1 million people |
| --- | --- | --- |
| Density of urban population (people per 1 km²) | 0,97 | 0,93 |
| Percentage of global population living in cities | 0,98 | 0,92 |

I sometimes use a metaphor about my own cognition. I say that I am three: a curious ape, a happy bulldog, and an austere monk. Now, my internal bulldog kickstarts, and bites deeper into that bone. There are strong positive correlations, and therefore there is correlated coupling of the behaviours which those statistical correlations are informative about. Still, I want to know how exactly that correlated coupling happens. The graph below, entitled ‘Urbanization and intensity of patenting’, presents three coefficients: the % of humanity in cities, the density of urban population, and the total number of patent applications (resident and non-resident together) per 1 million people. All three are brought to a common scale of measurement by transforming them into constant-base indexes: for each of them, the value observed in the year 2000 stands for 1, i.e. any other value is divided by that value from 2000.

The index of urbanization (% of humanity living in cities), represented by the orange line marked with black triangles, is the flattest of the three. The blue line of indexed density of urban population is slightly steeper, and the red line, marked with green circles, representing indexed patent applications per 1 million people, is definitely the hastiest to ascend. A slight change in human decisions to move to a city is coupled by correlation with human decisions to live in a progressively shrinking space in cities, and it looks like the former type of decision sort of amplifies the magnitude of change implied in the latter. That correlated coupling in collective behaviour, apparently working as a self-reinforcing loop, is further coupled by correlation with behaviours relative to developing science into something marketable, i.e. patenting.
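The transformation into constant-base indexes is simple division; a minimal sketch with invented yearly values, anchored on the year 2000:

```python
# Invented values of one metric by year, for illustration only.
series = {1998: 46.0, 2000: 50.0, 2005: 54.0, 2010: 59.0}

base = series[2000]  # the value observed in 2000 stands for 1
indexed = {year: value / base for year, value in series.items()}

print(indexed[2000])  # → 1.0
print(indexed[2010])  # → 1.18
```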

Urbanization reinforces the density of population in cities, through correlated coupling, and further reinforces the intensity of patenting. The density of population in cities grows faster than the percentage of humans living in cities because we need to keep those cities on a leash as regards the territory they take. The more humans we are, the more food we need, whence the need for preserving the agricultural land that serves to make food. We invent more and more technologies, per 1 million people, which both allow those people to live in shrinking individual spaces, and allow people in the countryside to produce more and more food. Technological change that has been going on in our civilisation at least since 1960 is oriented towards increasing urbanization of humanity, and more specifically towards an effective, rigid partition between urban areas and rural ones. That hypothesis seems to be holding.

Discover Social Sciences is a scientific blog, which I, Krzysztof Wasniewski, individually write and manage. If you enjoy the content I create, you can choose to support my work with a symbolic $1, or whatever other amount you please, via MY PAYPAL ACCOUNT. What you would be contributing to is almost exactly what you can read now. I have been blogging since 2017, and I think I have a pretty clearly rounded style.

At the bottom of the sidebar of the main page, you can access the archives of that blog, all the way back to August 2017. You can get an idea of how I work, what I work on, and how my writing has evolved. If you like social sciences served in this specific sauce, I will be grateful for your support of my research and writing.

‘Discover Social Sciences’ is a continuous endeavour and is mostly made of my personal energy and work. There are minor expenses, to cover the current costs of maintaining the website, or to collect data, yet I want to be honest: by supporting ‘Discover Social Sciences’, you will be mostly supporting my continuous stream of writing and online publishing. As you read through the stream of my updates, you can see that I usually write 1 – 3 updates a week, and this is the pace of writing that you can expect from me.

Besides the continuous stream of writing which I provide to my readers, there are some more durable takeaways. One of them is an e-book which I published in 2017, ‘Capitalism And Political Power’. Normally, it is available from the publisher, the Scholar publishing house, yet you can also download that e-book for free.

Another takeaway you can be interested in is ‘The Business Planning Calculator’, an Excel-based, simple tool for financial calculations needed when building a business plan.

Both the e-book and the calculator are available via links in the top right corner of the main page.
