Cautiously bon-vivant

I keep developing a few topics in parallel, with a special focus on two of them: lessons in economics and management which I can derive for my students from my personal experience as a small investor in the stock market, for one, and broader, scientific work on the civilizational role of cities and our human collective intelligence, for two.

I like starting with the observation of real life, and I like ending with it as well. What I see around me gives me the initial incentive to do research and makes the last pitch for testing my findings and intuitions. In my personal experience as an investor, I have simply confirmed an initial intuition that giving a written, consistent and public account thereof helps me nail down efficient strategies as an investor. As regards cities and collective intelligence, the first part of that topic comes from observing changes in urban life since COVID-19 broke out, and the second part is a generalized, though mild, intellectual obsession, which I started developing once I observed the way artificial neural networks work.

In this update, I want to develop on two specific points, connected to those two paths of research and writing. As far as my investments are concerned, I am seriously entertaining the idea of broadening my portfolio in the sector of renewable energies, more specifically in photovoltaics. I notice a rush on the solar business in the U.S., and I am thinking about investing in some of those shares. I have already made a nice profit on the stock of First Solar (https://investor.firstsolar.com/home/default.aspx ) as well as on that of SMA Solar (https://www.sma.de/en/investor-relations/overview.html ). Currently, I am observing three other companies: Vivint Solar (https://investors.vivintsolar.com/company/investors/investors-overview/default.aspx ), Canadian Solar (http://investors.canadiansolar.com/investor-relations ), and SolarEdge Technologies (https://investors.solaredge.com/investor-overview ). Below, I am placing the graphs of stock price over the last year for those solar businesses. There is something like a common trend in those stock prices: March and April 2020 brought a brief jump upwards, which subsequently turned into a shy lie-down, and since the beginning of August 2020 another journey into the realm of investors’ keen interest seems to be on the way.

Before you have a look at the graphs, here is a summary table with selected financials, approached as relative gradients of change, or d(x).

Change from 01/01/2020 to 31/08/2020

Company          d(market cap)   d(assets)   d(operational cash flow)
First Solar      +23,9%          –6%         deeper negative: –$80 million
SMA Solar        +27,5%          –10%        deeper negative: –€40 million
Vivint Solar     +362%           +11%        deeper negative: –$9 million
SolarEdge        +98%            0           +$50 million
Canadian Solar   +41%            +4%         +$90 million

There are two fundamental traits of those business models which I am having a close look at. Firstly, the correlation between changes in market capitalization and changes in assets: I am checking whether the solar businesses I want to invest in have their capital base functionally connected to the financial market. It looks a bit wobbly, as for now. Secondly, I look at current operational efficiency, measured with operational cash flow. Here, I can see there is still a lot to do. Here is the link to the YouTube video in which I develop that topic: Business models in renewable energies #3 Solar business and investment opportunities [Renew BM 3 2020-09-06 09-20-30; https://youtu.be/wYkW5KHQlDg ].

Those business models seem to be in a phase of slow stabilization. The industry as a whole seems to be slowly figuring out the right way of running that PV show, though the truly efficient scheme is still to be nailed down. Investment in those companies is based on reasonable trust in the growth of their market, and in the positive impact of technological innovation. Question: is it a good move to invest now? Answer: it is risky, but acceptably rational; once those business models become really efficient, the industry will be in, or close to, the phase of maturity, which, in turn, does not really allow expecting abnormally high returns on investment.

This is a very ‘financial’, hands-off approach to business models. In this case, business models of those photovoltaic businesses matter to me just to the extent of being fundamentally predictable. I don’t want to run a solar business, I just want to have elementary understanding of what’s going on, business-wise, to make my investment better grounded. Looking from inside a business, such an approach is informative about the way that a business model should ‘speak’ to investors.

At the end of the day, I think I am most likely to invest in SolarEdge. It seems to have all the LEGO blocks in place for a good opening. Good cash flow, although a bit sluggish when it comes to real investment.

As regards COVID-19 and cities, I am formulating the following hypothesis: COVID-19 has awakened some deeply rooted cultural patterns, which date back to times of high epidemic risk, long before vaccines, sanitation and widespread basic healthcare. Those patterns involve less spatial mobility in the population, and social interactions within relatively steady social circles of knowingly healthy people. As a result, the overall frequency of social interactions in cities is likely to decrease, and, as a contingent result, the formation of new social roles is likely to slow down. Then, either digital technologies take over the function of direct social interactions, and new social roles will shape themselves via your average smartphone, with all the apps it is blessed (haunted?) with, or the formation of new social roles will slow down in general. In that last case, we could have a hard time keeping up our pace of technological change. Here is the link to the YouTube video which summarizes what is written below: Urban Economics and City Management #4 COVID and social mobility in cities [Cities 4 2020-09-06 09-43-06; https://youtu.be/m3FZvsscw7A ].

I want to gain some insight into the epidemiological angle of that claim, and I am passing in review some recent literature. I start with: Gatto, M., Bertuzzo, E., Mari, L., Miccoli, S., Carraro, L., Casagrandi, R., & Rinaldo, A. (2020). Spread and dynamics of the COVID-19 epidemic in Italy: Effects of emergency containment measures. Proceedings of the National Academy of Sciences, 117(19), 10484-10491 (https://www.pnas.org/content/pnas/117/19/10484.full.pdf ). As is usually the case, my internal curious ape starts paying attention to details which could come across as secondary to other people, and my internal happy bulldog follows along and bites deep into those details. The little detail in this specific paper is a parameter: the number of people quarantined as a percentage of those positively diagnosed with Sars-Cov-2. In the model developed by Gatto et al., that parameter is kept constant at 40%, which is, apparently, the average level empirically observed in Italy during the Spring 2020 outbreak. Quarantine is strict isolation between carriers and (supposedly) non-carriers of the virus. Quarantine can be placed on the same scale as basic social distancing: it is just stricter, and, in quantitative terms, it drives much lower the likelihood of infectious social interaction. Gatto et al. insist that testing effort and quarantining are essential components of collective defence against the epidemic. I generalize: testing and quarantine are patterns of collective behaviour. I check whether people around me are carriers or not, and then I split them into two categories: those whom I strongly suspect of hosting and transmitting Sars-Cov-2, and all the rest. I define two patterns of social interaction with those two groups: very restrictive with the former, and cautiously bon vivant with the others (still, no hugging). As the technologies of testing inevitably diffuse across the social landscape, that structured pattern is likely to spread as well.

Now, I pay a short intellectual visit to Jiang, P., Fu, X., Van Fan, Y., Klemeš, J. J., Chen, P., Ma, S., & Zhang, W. (2020). Spatial-temporal potential exposure risk analytics and urban sustainability impacts related to COVID-19 mitigation: A perspective from car mobility behaviour. Journal of Cleaner Production, 123673, https://doi.org/10.1016/j.jclepro.2020.123673 . Their methodology is based on correlating the spatial mobility of cars in residential areas of Singapore with the risk of infection with COVID-19. A 44,3% ÷ 55,4% decrease in the spatial mobility of cars is correlated with a 72% decrease in the risk of social transmission of the virus. I intuitively translate it into geometrical patterns. Lower mobility in cars means a shorter average radius of travel by means of the available urban transportation. In the presence of epidemic risk, people move across a smaller average territory.
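That geometric translation can be checked with back-of-envelope arithmetic. Assuming, and this is my own simplification rather than the model used by Jiang et al., that the territory covered scales with the square of the average travel radius, a given drop in mobility maps onto a larger drop in covered area:

```python
# If the average travel radius shrinks by a fraction 'radius_drop',
# the covered territory shrinks roughly by 1 - (1 - radius_drop)**2.
def area_drop(radius_drop):
    remaining_radius = 1.0 - radius_drop
    return 1.0 - remaining_radius ** 2

# The 44.3% - 55.4% mobility decrease reported in the paper:
for drop in (0.443, 0.554):
    print(round(area_drop(drop), 3))  # 0.69, then 0.801
```

A 44,3% shorter radius thus corresponds to roughly a 69% smaller territory, in the same ballpark as the reported 72% decrease in transmission risk, which makes the geometric intuition at least plausible; a serious model would, of course, control for density and contact patterns.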

In another paper (or rather in a commented dataset), namely in Pepe, E., Bajardi, P., Gauvin, L., Privitera, F., Lake, B., Cattuto, C., & Tizzoni, M. (2020). COVID-19 outbreak response, a dataset to assess mobility changes in Italy following national lockdown. Scientific Data, 7(1), 1-7, https://www.nature.com/articles/s41597-020-00575-2.pdf?origin=ppub , I find an enlarged catalogue of metrics pertinent to spatial mobility. That paper, in turn, led me to the functionality run by Google: https://www.google.com/covid19/mobility/ . I went through all of it a bit cursorily, and I noticed two things. First of all, countries are strongly idiosyncratic in their social response to the pandemic. Still, and second of all, there are common denominators across idiosyncrasies, and the most visible one is cyclicality. Each society seems to have been experimenting with the spatial mobility it can afford and sustain in the presence of epidemic risk. There is an experimentation cycle of around 3 – 4 weeks. Experimentation means learning, and learning usually leads to durable behavioural change. In other words, we (I mean, homo sapiens) are currently learning, with the pandemic, new ways of being together, and those ways are likely to encrust themselves into our social structures.

The article by Kraemer, M. U., Yang, C. H., Gutierrez, B., Wu, C. H., Klein, B., Pigott, D. M., … & Brownstein, J. S. (2020). The effect of human mobility and control measures on the COVID-19 epidemic in China. Science, 368(6490), 493-497 (https://science.sciencemag.org/content/368/6490/493 ) shows that without any restrictions in place, the spatial distribution of COVID-19 cases is strongly correlated with the spatial mobility of people. With restrictions in place, that correlation can be curbed, though it is impossible to drive it down to zero. In plain human, it means that even lockdowns as stringent as those we could see in China cannot reduce spatial mobility to a level which would completely prevent the spread of the virus.

By the way, in Gao, S., Rao, J., Kang, Y., Liang, Y., & Kruse, J. (2020). Mapping county-level mobility pattern changes in the United States in response to COVID-19. SIGSPATIAL Special, 12(1), 16-26 (https://arxiv.org/pdf/2004.04544.pdf ), I read that the whole idea of tracking spatial mobility with people’s personal smartphones largely backfired, because the GPS receivers installed in the average phone have around 20 metres of horizontal error, on average, and are easily blurred when people gather in one place. Still, whilst the idea went down the drain as regards individual tracking of mobility, smartphones seem to provide reliable data for observing entire clusters of people, and the way those clusters flow across space. You can consult Jia, J. S., Lu, X., Yuan, Y., Xu, G., Jia, J., & Christakis, N. A. (2020). Population flow drives spatio-temporal distribution of COVID-19 in China. Nature, 1-5 (https://www.nature.com/articles/s41586-020-2284-y?sf233344559=1 ).

Bonaccorsi, G., Pierri, F., Cinelli, M., Flori, A., Galeazzi, A., Porcelli, F., … & Pammolli, F. (2020). Economic and social consequences of human mobility restrictions under COVID-19. Proceedings of the National Academy of Sciences, 117(27), 15530-15535 (https://www.pnas.org/content/pnas/117/27/15530.full.pdf ) show an interesting economic aspect of the pandemic. Restrictions on mobility give the strongest economic blow to the poorest people and to local communities marked by the relatively greatest economic inequalities. Restrictions imposed by governments are one thing, and self-imposed limitations on spatial mobility are another. If my intuition is correct, namely that we will be spontaneously modifying and generally limiting our social interactions in order to protect ourselves from COVID-19, those changes are likely to be the fastest and the deepest in high-income, low-inequality communities. As income decreases and inequality rises, those adaptive behavioural modifications are likely to weaken.

As I draw a provisional bottom line under that handful of scientific papers, my initial hypothesis seems to hold. We do modify, as a species, our social patterns, towards more encapsulated social circles. There is a process of learning taking place, and there is no mistake about it. That process of learning involves a downward recalibration of the average territory of activity, and a smart selection of the people whom we hang out with, based on what we know about the epidemic risk they convey. This is a process of learning by trial and error, and it is locally idiosyncratic. Idiosyncrasies seem to be somehow correlated with differences in wealth. Income and accumulated capital visibly give local communities an additional edge in that adaptive learning. In the long run, economic resilience seems to be a key factor in successful adaptation to epidemic risk.

To wrap up, here you have an educational piece as regards Business Models in the Media Industry #4 The gaming business [Media BM 4 2020-09-02 10-42-44; https://youtu.be/KCzCicDE8pc ]. I study the case of CD Projekt (https://www.cdprojekt.com/en/investors/ ), a Polish gaming company, known mostly for ‘The Witcher’ game and currently working on the next one, Cyberpunk, with Keanu Reeves lending his face to one of the characters. I discover a strange business model, which obviously has a hard time connecting with the creative process at the operational level. As strange as it might seem, the main investment activity, for the moment, consists in terminating and initiating cash bank deposits (!), and one of the most important operational activities is to push back in time the moment of officially charging customers with some economically due receivables. On top of all that, those revenues deferred into the future are officially written in the balance sheet as short-term liabilities, which CD Projekt owes to… whom exactly?

Black Swans happen all the time

MY EDITORIAL ON YOU TUBE

I continue with the topic of Artificial Intelligence used as a tool to study collective intelligence in human social structures. In scientific dissertations, the first question, to sort of answer right off the bat, is: ‘Why should anyone bother?’. What is the point of adding one more conceptual sub-repertoire, i.e. that of collective intelligence, to the already abundant toolbox of social sciences? I can give two answers. Firstly, and most importantly, we just can do it. We have Artificial Intelligence, and artificial neural networks are already used in social sciences as tools for optimizing models. From there, it is just one more step to use the same networks as tools for simulation: they can show how specifically a given intelligent adaptation is being developed. This first part of the answer leads to the second one, namely to the scientific value added of such an approach. My essential goal is to explore the meaning, the power, and the value of collective intelligent adaptation as such, and artificial neural networks seem to be useful instruments to that purpose.

We live and we learn. We learn in two different ways: by experimental trial and error, on the one hand, and by cultural recombination of knowledge, on the other. The latter means more than just the transmission of formalized cultural content: we can collectively learn as we communicate to each other what we know and as we recombine those individual pieces of knowledge. Quite a few times already, I have crossed intellectual paths with the ‘Black Swan Theory’ by Nassim Nicholas Taleb, and its central claim that we collectively tend to silence information about sudden, unexpected events which escape the rules of normality – the Black Swans – and yet our social structures are very significantly, maybe even predominantly, shaped by those unusual events. This is very close to my standpoint. I claim that we, humans, need to find a balance between chaos and order in our existence. Most of our culture is order, though, and this is pertinent to social sciences as well. Still, it is really interesting to see – and possibly experiment with – the way our culture deals with the unpredictable and extraordinary kind of s**t, sort of when history is really happening out there.

I have already had a go at something like a black swan, using a neural network, which I described in The perfectly dumb, smart social structure. The thing I discovered when experimenting with that piece of AI is that black swans are black just superficially, sort of. At the deepest, mathematical level of reality, roughly at the same pub where Pierre Simon Laplace plays his usual poker game, the unexpectedness of events is a property of human cognition, not of reality as such. The relatively new Interface Theory of Perception (Hoffman et al. 2015[1]; Fields et al. 2018[2]; see also I followed my suspects home) supplies interesting insights in this respect. States of the world are what they are, quite simply. No single state of the world is more expected than others, per se. We expect something to happen, or we don’t although we should. My interpretation of Nassim Nicholas Taleb’s theory is that Black Swans appear when we have a collective tendency to over-smooth a given chunk of our experience, and we collectively commit not to give a f**k about some strange outliers, which should jump to the eye but which we cognitively arrange so that they don’t really. Cognitively, Black Swans are qualia rather than phenomena as such.

Another little piece of knowledge I feel like contributing to the theory of Black Swan is that collective intelligence of human societies – or culture, quite simply – is compound and heterogenous. What is unexpected to some people is perfectly normal to others. This is how professional traders make money in financial markets: they are good at spotting recurrence in phenomena which look like perfect Black Swans to the non-initiated market players.

In the branch of philosophy called ‘praxeology’, there is a principle which states that the shortest path to a goal is the most efficient path, which is supposed to reflect the basics of Newtonian physics: the shortest path consumes the least amount of energy. Still, just as Newtonian physics is being questioned by its modern cousins, such as quantum physics, that classical approach of praxeology is being questioned by modern social sciences. I was born in communist Poland, in 1968, and I spent the first 13 years of my life there. I know by heart the Marxist logic of the shortest path. You want people to be equal? Force them to be equal. You want to use resources in the most efficient way? Good, make a centralized, country-wide plan for all kinds of business, and you know what, make it five years long. The shortest, most efficient path, right? Right, there was only one thing: it didn’t work. Today, we have a concept to explain it: hyper-coordination. When a big organization focuses on implementing one, ‘perfect’ plan, people tend to neglect many opportunities to experiment with little things, sort of sidekicks regarding the main thread of the plan. Such neglect has a high price, for much of what initially looks like haphazard disturbance is valuable innovation. Once put aside, those ideas seldom come back, and they turn into lost opportunities. In economic theory, lost opportunities have a metric attached: it is called opportunity cost. Lots of lost opportunities mean a whole stockpile of opportunity cost, which, in turn, takes its revenge later on, in the form of money that we don’t earn on the technologies we haven’t implemented. Translated into present-day challenges, lost ideas can kick our ass as lost chances to tackle a pandemic, or to adapt to climate change.

The shortest path to a goal is efficient under the condition that we know the goal. In long-range strategies, we frequently don’t, and then adaptive change is the name of the game. Here come artificial neural networks, once again. At first sight, if we assume learning by trial and error, and who knows where exactly we are heading, we tend to infer that we don’t know at all. Still, observing neural networks with their sleeves up, doing computational work, teaches an important lesson: learning by trial and error follows clear patterns and pathways, and so does adaptive change. Learning means putting order into the inherent chaos of reality. Probably the most essential principle of that order is that error is information, and, should it be used for learning, it needs to be memorized and processed.
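That principle, error as information that must be memorized and processed, can be shown with the most stripped-down learner imaginable; the numbers are made up for illustration:

```python
# A one-parameter learner: each step, it memorizes the gap between the
# desired outcome and reality, and nudges its guess by a fraction of it.
def learn(target, guess=0.0, rate=0.3, steps=30):
    errors = []
    for _ in range(steps):
        error = target - guess   # error as information about reality
        errors.append(error)     # memorized...
        guess += rate * error    # ...and processed into adaptive change
    return guess, errors

guess, errors = learn(target=1.0)
print(round(guess, 4))                   # 1.0: the guess converges on the target
print(abs(errors[-1]) < abs(errors[0]))  # True: errors shrink as learning proceeds
```

Remove the memorized error from the loop, and the guess never moves; that is the whole point.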

Building a method of adaptive learning is just as valuable as, and complementary to, preparing a plan with clearly cut goals. Goals are cognitive constructs which we make to put some order into the chaos of reality. These constructs are valuable tools for guiding our actions, yet they are in a loop with our experience. We stumble upon Black Swans more frequently than we think. We just learn how to incorporate them into our cognition. I have experienced, in my investment strategy, the value and the power of consistent, relentless reformulation and re-description of both my strategic goals and my experience.

How does our culture store information about events which we could label as errors? If I want to answer that question, I need to ask and answer another one: how do we collectively know that we have made a collective error, which can possibly be used as material for collective learning? I stress very strongly the different grammatical transformations of the word ‘collective’. A single person can know something by storing information, residual from sensory experience, in the synapses of the brain. An event can be labelled as an error, in the brain, when it yields an outcome that does not conform to the desired (expected) one. Of course, at this point, a whole field of scientific research emerges, namely that of cognitive sciences, and we have research techniques to study that stuff. On the other hand, a collective has no single brain acting as a distinct information-processing unit. A collective cannot know things in the same way an individual does.

Recognition of error is a combination of panic in front of chaos, on the one hand, and objective measurement of the gap between reality and expected outcomes, on the other. Let’s illustrate it with an example. When I am writing these words, it is July 12th, 2020, and it is electoral day: we are having, in Poland, the second-round ballot in presidential elections. As second rounds normally play out, there are just two candidates, the first two past the post in the first-round ballot. Judging by the polls, and by the arithmetic of transfer from the first round, it is going to be a close shave. In a country of about 18 million voters, and with an expected electoral attendance over 50%, the next 5 years of presidency are likely to be decided by around 0,5% of votes cast, roughly 40 ÷ 50 thousand people. Whatever the outcome of the ballot, there will be roughly 50% of the population claiming that our country is on the right track, and another 50% or so pulling their hair out and screaming that we are heading towards a precipice. Is there any error to make collectively, in this specific situation? If so, who and how will know whether the error really occurred, what its magnitude was, and how to process the corresponding information?
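The arithmetic behind that 40 ÷ 50 thousand estimate is easy to reproduce:

```python
voters = 18_000_000   # approximate number of eligible voters
turnout = 0.5         # expected electoral attendance
margin = 0.005        # roughly 0.5% of votes cast
print(int(voters * turnout * margin))  # 45000, in the middle of the 40-50 thousand range
```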

Observation of neural networks at work provides some insights in that respect. First of all, in order to assess error, we need a gap between the desired outcome and the state of reality such as it is. We can collectively assume that something went wrong if we have a collective take on what would be the perfect state of things. What if the desired outcome is an internally conflicted duality, as is the case with the Polish presidential elections of 2020? Still, that collectively desired outcome could be something else than just the victory of one candidate. Maybe the electoral attendance? Maybe the fact of having elections at all? Whatever it is that we are collectively after, we learn by making errors at nailing down that specific value.

Thus, what are we collectively after? Once again, what is the point of discovering anything in respect to presidential elections? Politics are functional when they help unite people, and yet some of the most efficient political strategies are those which use division rather than unity. Divide et impera, isn’t it? How to build social cooperation at the ground level, when the higher echelons of the political system love playing the poker of social dissent? Understanding ourselves seems to be the key.

Once again, neural networks suggest two alternative pathways for discovering it, depending on the amount of data we have regarding our own social structure. If we have acceptably abundant and reliable data, we can approach the thing straightforwardly, and test all the variables we have as the possible output ones in the neural network supposed to represent the way our society works. Variables which, when pegged as output ones in the network, allow the neural network to produce datasets very similar to the original one, are probably informative about the real values pursued by the given society. This is the approach I have already discussed a few times on my blog. You can find a scientific example of its application in my paper on energy efficiency.
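A minimal sketch of that procedure, under simplifying assumptions of my own: the ‘network’ is a single tanh neuron, the similarity measure is the Euclidean distance between the fitted outputs and the original variable, and the dataset with its variable names is entirely made up:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_one_neuron(X, y, epochs=300, rate=0.5):
    """Train a single tanh neuron on (X, y) by gradient descent
    and return its fitted outputs."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        pred = np.tanh(X @ w)
        grad = X.T @ ((y - pred) * (1.0 - pred ** 2)) / len(y)
        w += rate * grad
    return np.tanh(X @ w)

def score_output_candidates(data):
    """Peg each variable as the output in turn; a smaller distance between
    the fitted dataset and the original one suggests that the variable is
    closer to what the structure actually optimizes."""
    scores = {}
    for name, y in data.items():
        X = np.column_stack([v for k, v in data.items() if k != name])
        scores[name] = float(np.linalg.norm(y - fit_one_neuron(X, y)))
    return scores

# Made-up toy dataset in which 'energy_use' is, by construction,
# the variable the other two jointly 'produce':
n = 200
income = rng.normal(size=n)
mobility = rng.normal(size=n)
energy_use = np.tanh(0.8 * income - 0.5 * mobility) + 0.1 * rng.normal(size=n)

scores = score_output_candidates({'income': income, 'mobility': mobility,
                                  'energy_use': energy_use})
print(min(scores, key=scores.get))  # recovers 'energy_use' as the likely output
```

On real data, the network and the similarity measure would be richer, but the logic stays the same: the pegged output with the best reproduction of the original dataset is the best candidate for the value the structure pursues.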

There is another interesting way of approaching the same issue, and this one is much more empiricist, as it forces us to discover more from scratch. We start with the simple observation that things change. When they change a lot, and we can measure change on some kind of quantitative scale, we call it variance. There is a special angle of approach to variance, when we observe it over time. Observable behavioural change – or variance at different levels of behavioural patterns – includes a component of propagated error. How? Let’s break it down.

When I change my behaviour in a non-aleatory way, i.e. when my behavioural change makes at least some sense, anyone can safely assume that I made the change for a reason. I changed my behaviour because my experience told me that I should. I recognized something I f**ked up, or some kind of frustration with the outcomes of my actions, and I changed. I have somehow incorporated information about past error into my present behaviour, whence the logical equivalence: Variance in behaviour = Residual behaviour + Adaptive change after the recognition of error + Aleatory component.
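That equivalence can be rendered numerically with a toy process; the three components, and all the numbers, are my own stylized assumptions:

```python
import random

random.seed(7)

target = 10.0               # the desired outcome
behaviour, error = 8.0, 0.0
series = []
for _ in range(500):
    adaptive = 0.2 * error             # adaptive change after the recognition of error
    aleatory = random.gauss(0.0, 0.5)  # aleatory component
    # Residual behaviour + adaptive change + aleatory component:
    behaviour = behaviour + adaptive + aleatory
    error = target - behaviour         # the error is memorized for the next period
    series.append(behaviour)

# Behaviour drifts towards the target, and its observed variance mixes
# the propagated-error component with the purely random one.
print(round(sum(series[-100:]) / 100, 1))
```

Switch the adaptive term off, and the process becomes a pure random walk away from its starting point: the propagated error is what keeps observed variance anchored around something.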

Discover Social Sciences is a scientific blog, which I, Krzysztof Wasniewski, individually write and manage. If you enjoy the content I create, you can choose to support my work, with a symbolic $1, or whatever other amount you please, via MY PAYPAL ACCOUNT.  What you will contribute to will be almost exactly what you can read now. I have been blogging since 2017, and I think I have a pretty clearly rounded style.

At the bottom of the sidebar on the main page, you can access the archives of this blog, all the way back to August 2017. You can get an idea of how I work, what I work on, and how my writing has evolved. If you like social sciences served in this specific sauce, I will be grateful for your support of my research and writing.

‘Discover Social Sciences’ is a continuous endeavour and is mostly made of my personal energy and work. There are minor expenses, to cover the current costs of maintaining the website, or to collect data, yet I want to be honest: by supporting ‘Discover Social Sciences’, you will be mostly supporting my continuous stream of writing and online publishing. As you read through the stream of my updates on https://discoversocialsciences.com , you can see that I usually write 1 – 3 updates a week, and this is the pace of writing that you can expect from me.

Besides the continuous stream of writing which I provide to my readers, there are some more durable takeaways. One of them is an e-book which I published in 2017, ‘Capitalism And Political Power’. Normally, it is available with the publisher, the Scholar publishing house (https://scholar.com.pl/en/economics/1703-capitalism-and-political-power.html?search_query=Wasniewski&results=2 ). Via https://discoversocialsciences.com , you can download that e-book for free.

Another takeaway you can be interested in is ‘The Business Planning Calculator’, an Excel-based, simple tool for financial calculations needed when building a business plan.

Both the e-book and the calculator are available via links in the top right corner of the main page on https://discoversocialsciences.com .



[1] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic bulletin & review, 22(6), 1480-1506.

[2] Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186-213. https://doi.org/10.1016/j.cogsys.2017.10.003

Cruel and fatalistic? Weelll, not necessarily.


I am developing one particular thread in my research, somewhat congruent with the research on the role of cities, namely the phenomenon of collective intelligence and the prospects for using artificial intelligence to study human social structures. I am going both for good teaching material and for valuable scientific insight.

In social sciences, we face a sort of embarrassing question, which nevertheless is a fundamental one, namely how we should interpret quantitative data about societies. Simple but puzzling: are those numbers a meaningful representation of collectively pursued desired outcomes, or should we view them as a largely random, temporary representation of something going on at a deeper, essentially unobserved level?

I guess I can use artificial neural networks to try and solve that puzzle, at least to some extent. I like starting with empirics, or, in plain human, with facts which I have observed so far. My most general observation, pertinent to every single instance of me meddling with artificial neural networks, is that they are intelligent structures. I ground this general claim in two specific observations. Firstly, a neural network can experiment with itself, and come up with meaningful outcomes of experimentation, whilst keeping structural stability. In other words, an artificial neural network can change a part of itself whilst staying the same in its logical frame. Secondly, when I make an artificial neural network observe its own internal coherence, that observation changes the behaviour of the network. For me, that capacity for meaningful and functional introspection is an important sign of intelligence.
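The second observation, that self-observation changes a network’s behaviour, can be sketched with a deliberately simple learner which watches the dispersion of its own recent errors and adjusts its learning rate accordingly; the whole setup, numbers included, is a made-up illustration, not a description of any particular network I have run:

```python
import random

def run(introspective, steps=200, seed=1):
    random.seed(seed)
    target, guess, rate = 1.0, 0.0, 0.5
    recent = []
    for _ in range(steps):
        error = target - guess + random.gauss(0.0, 0.2)
        recent = (recent + [error])[-10:]
        if introspective and len(recent) == 10:
            spread = max(recent) - min(recent)  # self-observed internal coherence
            rate = 0.5 / (1.0 + spread)         # introspection changes behaviour
        guess += rate * error
    return guess

# Both runs converge near the target, but the introspective one moves
# with a learning rate modulated by its own observed coherence.
print(round(run(introspective=False), 1), round(run(introspective=True), 1))
```

The point is not the numbers themselves, but the loop: the moment the structure feeds an observation of itself back into its own parameters, its trajectory changes.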

From this intellectual standpoint, where artificial neural networks are assumed to be intelligent structures, I pass to the question of what kind of intelligence those networks can possibly represent. At this point I assume that human social structures are intelligent, too, as they can experiment with themselves (to some extent) whilst keeping structural stability, and they can functionally observe their own internal coherence and learn therefrom. Those two intelligent properties of human social structures are what we commonly call culture.

As I put those two intelligences – that of artificial neural networks and that of human social structures – back to back, I arrive at a new definition of culture. Instead of defining culture as a structured collection of symbolic representations, I define it as collective intelligence of human societies, which, depending on its exact local characteristics, endows those societies with a given flexibility and capacity to change, through a given capacity for collective experimentation.      

Once again, these are my empirical observations, the most general ones regarding the topic at hand. Empirically, I can observe that both artificial neural networks and human social structures can experiment with themselves in the view of optimizing something, whilst maintaining structural stability, and yet that capacity to experiment with itself has limits. Both a neural network and a human society can either stop experimenting or go haywire when experimentation leads to excessively low internal coherence of the system. Thence the idea of using artificial neural networks to represent the way that human social structures experiment with themselves, i.e. the way we are collectively intelligent. When we think about our civilisation, we intuitively ask what’s the endgame, seen from the present moment. Where are we going? That’s a delicate question, and, according to historians such as Arnold Toynbee, this is essentially a pointless one. Civilisations develop and degenerate, and supplant each other, in multi-secular cycles of apparently some 2500 – 3500 years each. If I ask the question ‘How can our civilisation survive, e.g. how can we survive climate change?’, the most rationally grounded answer is ‘Our civilisation will almost certainly fade away and die out, and then a new civilisation will emerge, and climate change could be as good an excuse as anything else to do that transition’. Cruel and fatalistic? Weelll, not necessarily. Think and ask yourself: would you like to stay the same forever? Probably not. The only way to change is to get out of our comfort zone, and the same is true for civilisations. The death of civilisations is different from extinction: when a civilisation dies, its culture transforms radically, i.e. its intelligent structure changes, yet the human population essentially survives.        

Social sciences are sciences because they focus on the ‘how?’ more than on the ‘why?’. The ‘why?’ implies there is a reason for everything, thus some kind of ultimate goal. The ‘how?’ dispenses with those considerations. The personal future of each individual human is almost entirely connected to the ‘how?’ of civilizational change and virtually completely disconnected from the ‘why?’. Civilisations change at the pace of centuries, and this is a slow pace. Even a person who lives for 100 years can see only a glimpse of human history. Yes, our individual existences are incredibly rich in personal experience, and we can use that existential wealth to make our own lives better, and to give a touch of betterment to the lives of incoming humans (i.e. our kids), and yet our personal change is very different from civilizational change. I will even go as far as claiming that individual human existence, with all its twists and turns, usually takes place inside one single cultural pattern, therefore inside a given civilisation. There are just a few human generations in the history of mankind, whose individual existences happened at the overlapping between a receding civilization and an emerging one.

On the night of July 6th, 2020, I had that strange dream, which I believe could be important in the teaching of social sciences. I dreamt of being pursued by some nondescript ‘them’, in a slightly gangster fashion. I knew they had guns. I procured a gun for myself by breaking its previous owner's neck by surprise. Yes, it is shocking, but it was just the beginning. I was running away from those people who wanted to get me. I was running through something like an urban neighbourhood, slightly like Venice, Italy, with a lot of canals all over the place. As I was running, I was pushing people into those canals, just to have a free way and keep running. I shot a few people dead, when they tried to get hold of me. All the time, I was experiencing intense, nagging fear. I woke up from that dream, shortly after midnight, and that intense fear was still resonating in me. After a few minutes of being awake, and whilst still being awake, I experienced another intense frame of mind, like a realization: me in that dream, doing horrible things when running away from people about whom I think they could try to hurt me, it was a metaphor of quite a long window in my so-far existence. Many a time I would just rush forward and do things I am still ashamed of today, and, when I meditate about it, I was doing it out of that irrational fear that other people could do me harm when they sort of catch on. When this realization popped in my mind, I immediately calmed down, and it was deep serenity, as if a lot of my deeply hidden fears had suddenly evaporated.

Fear is a learnt response to environmental factors. Recently, I have been discovering, and I keep discovering something new about fear: its fundamentally irrational nature. All of my early life, I have been taught that when I am afraid of something, I probably have good reasons to. Still, over the last 3 years, I have been practicing intermittent fasting (combined with a largely paleo-like diet), just to get out of a pre-diabetic state. Month after month, I was extending that window of fasting, and now I am at around 17 – 18 hours out of 24. A little bit more than one month ago, I decided to jump over another hurdle, i.e. that of fasted training. I started doing my strength training when fasting, early in the morning. The first few times, my body was literally shaking with fear. My muscles were screaming: ‘Noo! We don’t want effort without food!’. Still, I gently pushed myself, taking good care of staying in my zone of proximal development, and already after a few days, all changed. My body started craving for those fasted workouts, as if I was experiencing some strange energy inside of me. Something that initially had looked like a deeply organic and hence 100% justified a fear, turned out to be another piece of deeply ingrained bullshit, which I removed safely and fruitfully.

My generalisation on that personal experience is a broad question: how much of that deeply ingrained bullshit, i.e. completely irrational and yet very strong beliefs, do we carry inside our body, like literally inside our body? How many memories, good and bad, do we have stored in our muscles, in our sub-cortical neural circuitry, in our guts and endocrine glands? It is fascinating to discover what we can change in our existence when we remove those useless protocols.

So far, I have used artificial neural networks in two meaningful ways, i.e. meaningful from the point of view of what I know about social sciences. It is generally useful to discover what we, humans, are after. I can use a dataset of common socio-economic stats, and test each of them as the desired outcome of an artificial neural network. Those stats have a strange property: some of them come as much more likely desired outcomes than others. A neural network oriented on optimizing those ‘special’ ones is much more similar to the original data than networks pegged on other variables. It is also useful to predict human behaviour. I figured out a trick to make such predictions: I define patterns of behaviour (social roles or parts thereof), and I make a neural network which simulates the probability that each of those patterns happens.

One avenue consists in discovering a hierarchy of importance in a set of socio-economic variables, i.e. in common stats available from external sources. In this specific approach, I treat empirical datasets of those stats as manifestation of the corresponding state spaces. I assume that the empirical dataset at hand describes one possible state among many. Let me illustrate it with an example: I take a big dataset such as Penn Tables. I assume that the set of observations yielded by the 160ish countries in the database, observed since 1964, is like a complex scenario. It is one scenario among many possible. This specific scenario has played out the way it has due to a complex occurrence of events. Yet, other scenarios are possible.      

To put it simply, datasets made of those typical stats have a strange property, possible to demonstrate by using a neural network: some variables seem to reflect social outcomes of particular interest for the society observed. A neural network pegged on those specific variables as output ones produces very little residual error, and, consequently, stays very similar to the original dataset, as compared to networks pegged on other variables therein.

Under this angle of approach, I ascribe an ontological interpretation to the stats I work with: I assume that each distinct socio-economic variable informs about a distinct phenomenon. Mind you, it is just one possible interpretation. Another one, almost the opposite, claims that all the socio-economic stats we commonly use are essentially facets (or dimensions) of the same, big, compound phenomenon called social existence of humans. Long story short, when I ascribe ontological autonomy to different socio-economic stats, I can use a neural network to establish two hierarchies among these variables: one hierarchy is that of value in desired social outcomes, and another one of epistatic role played by individual variables in the process of achieving those outcomes. In other words, I can assess what the given society is after, and what are the key leverages being moved so as to achieve the outcome pursued.
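A hedged sketch of that logic, on a purposefully synthetic dataset (all figures invented): variables A, B and D are independent random draws, whilst C is a calm blend of A and B, i.e. a plausible ‘desired outcome’. Each variable takes its turn as the output of the same one-neuron network, and the residual errors then rank the variables. Nothing here is my actual research code; it is a minimal illustration of the method.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Synthetic, standardized dataset: 60 observations of 4 variables.
names = ["A", "B", "C", "D"]
rows = []
for _ in range(60):
    a, b, d = random.random(), random.random(), random.random()
    c = 0.25 + 0.25 * (a + b)  # C depends on A and B; D is pure noise
    rows.append([a, b, c, d])

def residual_error(out_col, data, epochs=300, lr=0.8):
    """Mean squared error of a one-neuron network pegged on column out_col."""
    n_in = len(data[0]) - 1
    w = [random.uniform(-0.3, 0.3) for _ in range(n_in)]
    bias = 0.0
    err = 0.0
    for _ in range(epochs):
        err = 0.0
        for row in data:
            x = [v for j, v in enumerate(row) if j != out_col]
            t = row[out_col]
            y = sigmoid(bias + sum(wi * xi for wi, xi in zip(w, x)))
            e = t - y
            err += e * e
            grad = lr * e * y * (1.0 - y)
            bias += grad
            for i in range(n_in):
                w[i] += grad * x[i]
    return err / len(data)

errors = {name: residual_error(j, rows) for j, name in enumerate(names)}
print(sorted(errors, key=errors.get))  # most plausible desired outcomes first
```

The network pegged on C, the variable actually produced by the others, stays closest to the original data, whilst the one pegged on pure noise lags far behind: exactly the asymmetry I exploit when hunting for collectively pursued outcomes.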

Another promising avenue of research, which I started exploring quite recently, is that of using an artificial neural network as a complex set of probabilities. Those among you, my readers, who are at least mildly familiar with the mechanics of artificial neural networks, know that a neural network needs empirical data to be transformed in a specific way, called standardization. The most common way of standardizing consists in translating whatever numbers I have at the start into a scale of relative size between 0 and 1, where 1 corresponds to the local maximum. I thought that such a strict decimal fraction comprised between 0 and 1 can spell ‘probability’, i.e. the probability of something happening. This line of logic applies to just some among the indefinitely many datasets we can make. If I have a dataset made of variables such as, for example, GDP per capita, healthcare expenditures per capita, and the average age which a person ends their formal education at, it cannot be really considered in terms of probability. If there is any healthcare system in place, there are always some healthcare expenditures per capita, and their standardized value cannot be really interpreted as the probability of healthcare spending taking place. Still, I can approach the same under a different angle. The average healthcare spending per capita can be decomposed into a finite number of distinct social entities, e.g. individuals, local communities etc., and each of those social entities can be associated with a probability of using any healthcare at all during a given period of time.
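The standardization I refer to, i.e. expressing each value as a fraction of the local maximum so that 1 marks the maximum observed, can be sketched in a few lines; the spending figures below are purely hypothetical.

```python
# Standardization over the local maximum: every transformed value falls
# in (0, 1], and the value 1 corresponds to the maximum in the sample.
def standardize_over_max(values):
    peak = max(values)
    return [v / peak for v in values]

# Hypothetical healthcare expenditures per capita, in whatever currency.
healthcare_spending_per_capita = [820.0, 4100.0, 2650.0, 5400.0]
scaled = standardize_over_max(healthcare_spending_per_capita)
print(scaled)
```

It is precisely because the output lands between 0 and 1 that the temptation arises to read those numbers as probabilities, with the caveats discussed above.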

That other approach to using neural networks, i.e. as sets of probabilities, has some special edge to it. I can simulate things happening or not, and I can introduce a disturbing factor, which kicks certain pre-defined events into existence or out of it. I have observed that once a phenomenon becomes probable, it is not really possible to kick it out of the system, yet it can yield to newly emerging phenomena. In other words, my empirical observation is that once a given structure of reality is in place, with distinct phenomena happening in it, that structure remains essentially there, and it doesn’t fade even if probabilities attached to those phenomena are random. On the other hand, when I allow a new structure, i.e. another set of distinct phenomena, to come into existence with random probabilities, that new structure will slowly take over a part of the space previously occupied just by the initially incumbent, ‘old’ set of phenomena. All in all, when I treat standardized numerical values – which an artificial neural network normally feeds on – as probabilities of happening rather than magnitudes of something existing anyway, I can simulate the unfolding of entire new structures. This is a structure generating other structures.
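A toy simulation of that ‘structure generating other structures’ intuition, under my own simplifying assumptions: three incumbent phenomena share one unit of probabilistic ‘space’, their probabilities drift at random, and at some point a newcomer kicks into existence. The drift band and all numbers are arbitrary illustration, not a calibrated model.

```python
import random

random.seed(7)

def renormalize(p):
    """Make the probabilities share exactly one unit of 'space'."""
    total = sum(p.values())
    return {k: v / total for k, v in p.items()}

# Incumbent structure: three phenomena with random initial probabilities.
probabilities = renormalize({"A": random.random(),
                             "B": random.random(),
                             "C": random.random()})

for step in range(200):
    # random multiplicative drift in each phenomenon's probability
    probabilities = {k: v * random.uniform(0.8, 1.2)
                     for k, v in probabilities.items()}
    if step == 50:
        # disturbance: a new phenomenon comes into existence
        probabilities["D"] = random.random()
    probabilities = renormalize(probabilities)

print(probabilities)
```

The incumbents never drop out of the system entirely, yet after step 50 they irreversibly yield part of their space to the newcomer, which is the pattern I describe above.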

I am trying to reverse engineer that phenomenon. Why do I use at all numerical values standardized between 0 and 1, in my neural network? Because this is the interval (type) of values that the function of neural activation needs. I mean there are some functions, such as the hyperbolic tangent, which can work with input variables standardized between –1 and 1, yet if I want my data to be fully digestible for any neural activation function, I’d better standardize it between 0 and 1. Logically, I infer that mathematical functions useful for simulating neural activation are mathematically adapted to deal with sets of probabilities (range between 0 and 1) rather than sets of local magnitudes.
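A quick numerical check of those ranges: the sigmoid keeps its output strictly inside (0, 1), like a probability, whilst the hyperbolic tangent ranges over (–1, 1). The sample points are arbitrary.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

samples = [-5.0, -1.0, 0.0, 1.0, 5.0]
sig_values = [sigmoid(x) for x in samples]     # always inside (0, 1)
tanh_values = [math.tanh(x) for x in samples]  # always inside (-1, 1)
print(sig_values)
print(tanh_values)
```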

Discover Social Sciences is a scientific blog, which I, Krzysztof Wasniewski, individually write and manage. If you enjoy the content I create, you can choose to support my work, with a symbolic $1, or whatever other amount you please, via MY PAYPAL ACCOUNT.  What you will contribute to will be almost exactly what you can read now. I have been blogging since 2017, and I think I have a pretty clearly rounded style.

At the bottom of the sidebar of the main page, you can access the archives of that blog, all the way back to August 2017. You can get an idea of how I work, what I work on, and how my writing has evolved. If you like social sciences served in this specific sauce, I will be grateful for your support to my research and writing.

‘Discover Social Sciences’ is a continuous endeavour and is mostly made of my personal energy and work. There are minor expenses, to cover the current costs of maintaining the website, or to collect data, yet I want to be honest: by supporting ‘Discover Social Sciences’, you will be mostly supporting my continuous stream of writing and online publishing. As you read through the stream of my updates on https://discoversocialsciences.com , you can see that I usually write 1 – 3 updates a week, and this is the pace of writing that you can expect from me.

Besides the continuous stream of writing which I provide to my readers, there are some more durable takeaways. One of them is an e-book which I published in 2017, ‘Capitalism And Political Power’. Normally, it is available with the publisher, the Scholar publishing house (https://scholar.com.pl/en/economics/1703-capitalism-and-political-power.html?search_query=Wasniewski&results=2 ). Via https://discoversocialsciences.com , you can download that e-book for free.

Another takeaway you can be interested in is ‘The Business Planning Calculator’, an Excel-based, simple tool for financial calculations needed when building a business plan.

Both the e-book and the calculator are available via links in the top right corner of the main page on https://discoversocialsciences.com .


What can be wanted only at the collective level

MY EDITORIAL ON YOU TUBE

I am recapitulating on my research regarding cities and their role in our civilization. At the same time, I start preparing educational material for the next semester of teaching, at the university. I am testing somehow new a format, where I try to put science and teaching content literally side by side. The video editorial on You Tube plays an important part here, and I sincerely invite all my readers to watch it.

I am telling the story of cities once again, from the beginning. Beginning of March 2020. In Poland, we are going into the COVID-19 lockdown. I am cycling through the virtually empty streets of Krakow, my hometown. I slowly digest the deep feeling of weirdness: the last time I saw the city that inanimate, it was during some particularly tense moments in the times of communism, decades ago. A strange question keeps floating on the surface of my consciousness: ‘How many human footsteps per day does this place need to be truly alive?’.

Cities are demographic anomalies. This is particularly visible from space, when satellite imagery serves to distinguish urban areas from rural ones. Cities are abnormally dense agglomerations of man-made architectural structures, paired with just as abnormally dense clusters of night-time lights. We, humans, we agglomerate in cities. We purposefully reduce the average social distance, and just as purposefully increase the intensity of our social interactions. Why and how do we do that? The ‘why?’ is an abyssal question. If I attempt to answer it with all the intellectual rigor possible, I find it almost impossible. Still, there is hope. I have that little theory of mine – well, not just mine, it is called ‘contextual ethics’ – namely that we truly value the real outcomes we get. In other words, we really want the things which we actually get at the end of the day. This could be a slippery slope. Did Londoners want to have the epidemic of plague, in 1664? I can cautiously say it wasn’t on the top list of their wildest dreams. Yet, acquiring herd immunity and figuring out ways of containing an epidemic outbreak: well, that could be a valuable outcome in the long perspective. That outcome has a peculiar trait: it sort of can be wanted only at the collective level, since it is a collective outcome par excellence. If we pursue an outcome like this one, we are being collectively intelligent. It would be somehow adventurous to try and acquire herd immunity singlehandedly.

Cities manifest one of the ways we are collectively intelligent. In cities, we get individual outcomes, and collective ones, sort of in layers. Let’s take a simple pattern of behaviour: imitation and personal style. We tend to imitate each other, and frequently, as we are doing so, we love pretending we are reaching the peak of originality. Both imitation and the pretension to originality make sense only when there are other people around, and the more people there are around, the more meaningful it is. Imagine you have a ranch in Texas, like 200 hectares, and in order to imitate anyone, or to pretend being original, you need to drive for 2 hours one way, and then 2 hours back, and, at the end of the day, you have interacted with maybe 20 people.

Our human social structures are machines which make other social structures, and not only sustain the humans currently inside. A lot of behavioural patterns make any sense at all only when the density of population reaches a reasonably required minimum. Social interactions produce and convey information which our brains use to form new patterns. As I think about it, my take on collective intelligence opens up onto the following claim: we have cities in order to make some social order for the future, an order made of social roles and group identities. We have a given sharpness of social distinction between cities and the countryside, e.g. in terms of density in population, in order to create some social roles and group identities for the future.

We, humans, have discovered – although we might not be aware of what we discovered – that certain types of social interactions (not all of them) can be made into recurrent patterns, and those patterns have the capacity to make new patterns. As long as I just date someone, it is a temporary interaction. When I propose, it takes on some colours: engagement can turn into marriage (well, it should, technically), thus one pattern of interaction can produce another pattern. When I marry a woman, it opens up a whole plethora of new interactions: parenthood, agreement as for financials (prenuptial contracts or the absence thereof), in-law family relations (parents-in-law, siblings-in-law). Have you noticed that some of the greatest financial fortunes, over centuries, have been accumulated inside family lineages? See? We hit the right pattern of social interactions, and from there we can derive either new copies of the same structure or altogether new structures.

Blast! I have just realized I finally nailed down something which I have been turning around in my mind for months: the logical link between human social structures and artificial neural networks. I use artificial neural networks to simulate collective intelligence in human societies, and I have found one theoretical assumption which I need to put in such a model, namely that consecutive states of society must form a Markov chain, i.e. each individual state must be possible to derive entirely from the preceding state, without any exogenous corrective influence.

Still, I felt I was missing something and now: boom! I figured it out. Once again: among different social interactions there are some which have the property to turn into durable and generative patterns, i.e. they reproduce their general structure in many local instances, each a bit idiosyncratic, yet all based on the same structure. In other words, some among our social interactions have the capacity to be intelligent structures, which experiment with themselves by producing many variations of themselves. This is exactly what artificial neural networks are: they are intelligent structures able to experiment with themselves by generating many local, idiosyncratic variations and thereby nailing down the variation which minimizes error in achieving a desired outcome.

When I use an artificial neural network to simulate social change, I implicitly assume that the social change in question is a Markov chain of states, and that the society under simulation has some structural properties which remain consistent over all the Markov chain of states. Now, I need to list the structural properties of artificial neural networks I use in my research, and to study the conditions of their stability. An artificial neural network is a sequence of equations being run in a loop. Structure of the network is given by each equation separately, and by their sequential order. I am going to break down that logical structure once again and pass its components in review. Just a general, introductory remark: I use really simple neural networks, which fall under the general category of multi-layer perceptron. This is probably the simplest that can be in terms of AI, and this is the logic which I connect to collective intelligence in human societies.
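A minimal sketch of such a multi-layer perceptron, written as a loop of equations. The weights are the state of the structure; every update below uses only the current state and the fixed data, so consecutive states form the Markov chain I mention. The data, the learning rate and the seed are all arbitrary assumptions for illustration.

```python
import math
import random

random.seed(3)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical data: two stimuli per observation, one desired outcome.
X = [[0.1, 0.9], [0.8, 0.2], [0.4, 0.6]]
T = [0.9, 0.2, 0.6]

# The state of the network: weights of a 2-2-1 multi-layer perceptron.
w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-0.5, 0.5) for _ in range(2)]

history = []
for _ in range(300):
    total = 0.0
    for x, t in zip(X, T):
        # forward pass: the fixed sequence of equations
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
        y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
        e = t - y
        total += e * e
        # backward pass: next state derived entirely from the current one
        d_out = e * y * (1.0 - y)
        for i in range(2):
            d_hid = d_out * w_out[i] * h[i] * (1.0 - h[i])
            w_out[i] += 0.8 * d_out * h[i]
            for j in range(2):
                w_hidden[i][j] += 0.8 * d_hid * x[j]
    history.append(total)

print(history[0], history[-1])
```

The logical structure, i.e. the equations and their sequence, stays constant across the whole chain of states, whilst the residual error shrinks: structural stability combined with experimentation.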

The most fundamental structure of an artificial neural network is given by the definition of input variables – the neural stimuli – and their connection to the output variable(s). I used that optional plural, i.e. the ‘(s)’ suffix, because the basic logic of an artificial neural network assumes defining just one output variable, whilst it is possible to construe that output as the magnitude of a vector. In other words, any desired outcome given by one number can be seen as being derived from a collection of numbers. I hope you remember from your math classes in high school that the Pythagorean theorem, I mean the a² + b² = c² one, has a more general meaning, beyond the simple geometry of a right-angled triangle. Any positive number we observe – our height in centimetres (or in feet and inches), the right amount of salt to season shrimps etc. – any of those amounts can be interpreted as the square root of the sum of squares of two other numbers. I mean, any x > 0 is x = (y² + z²)^0.5. Logically, those shady y and z can be seen, in turn, as derived, Pythagorean way, from even shadier and more mysterious entities. In other words, it is plausible to assume that x = (y² + z²)^0.5 = {[(a² + b²)^0.5]² + [(c² + d²)^0.5]²}^0.5 etc.
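A hedged numerical check of that decomposition: any positive x can be split into two components y and z whose squares sum back to x². The angle 0.7 below is arbitrary; any angle would do, since cos² + sin² = 1.

```python
import math

x = 42.0                   # any positive number we happen to observe
y = x * math.cos(0.7)      # one 'shady' component
z = x * math.sin(0.7)      # the other 'shady' component
reconstructed = math.sqrt(y ** 2 + z ** 2)
print(reconstructed)
```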

As a matter of fact, establishing an informed distinction between input variables on the one hand, and the output variable on the other hand is the core and the purpose of my method. I take a handful of variables, informative about a society or a market, and I make as many alternative neural networks as there are variables. Each alternative network has the same logical structure, i.e. the same equations in the same sequence, but is pegged on a different variable as its output. At some point, I have the real human society, i.e. the original, empirical dataset, and as many alternative versions thereof as there are variables in the dataset. In other words, I have a structure and a finite number of experiments with that structure. This is the methodology I used, for example, in my paper on energy efficiency.

There are human social structures which can make other social structures, by narrowing down, progressively, the residual error generated when trying to nail down a desired outcome and experimenting with small variations of the structure in question. Those structures need abundant social interactions in order to work. An artificial neural network which has the capacity to stay structurally stable, i.e. which has the capacity to keep the Euclidean distance between variables inside a predictable interval, can be representative for such a structure. That predictable interval of Euclidean distance corresponds to predictable behavioural coupling, the so-called correlated coupling: social entity A reacts to what social entity B is doing, and this reaction is like music, i.e. it involves moving along a scale of response in a predictable pattern.
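The Euclidean distance I refer to can be computed like this, with hypothetical standardized variables and an arbitrarily chosen ‘predictable interval’; in my actual work, the variables and the band would come from the dataset at hand.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two standardized variables."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Hypothetical standardized variables, observed over four periods.
variable_a = [0.2, 0.5, 0.7, 0.4]
variable_b = [0.3, 0.4, 0.8, 0.5]

d = euclidean(variable_a, variable_b)
in_band = 0.0 <= d <= 0.5   # the 'predictable interval', chosen arbitrarily
print(d, in_band)
```

Tracking that distance over consecutive states of the network is how I operationalize the correlated coupling described above.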

I see cities as factories of social roles. The intensity of social interactions in cities works like a social engine. New businesses emerge, new jobs form in the labour market. All these require new skillsets, and yet those skillsets are expected to stop being entirely new and to become somehow predictable and reliable, whence the need for correspondingly new social roles in training and education for those new skills. As people endowed with those new skills progressively take over business and jobs, even more novel skillsets emerge and so the wheel of social change spins. Peculiar among social interactions in cities are those between young people, i.e. teenagers and young adults up to the age of 25. Those interactions have a special trait, just as do the people involved: their decision-making processes are marked by significantly greater an appetite for risk and immediate gratification, as opposed to more conservative and more perseverant behavioural patterns in older adults.

Cities allow agglomeration of people very similar as regards the phase of their personal lifecycle, and, at the same time, very different in their cultural background. People mix a lot inside generations. Cities produce a lot of social roles marked with a big red label ‘Only for humans below 30!’, and, at the same time, lots of social roles marked ‘Below 40, don’t even think about it!’. Please, note that I define a generation in sociological terms, i.e. as a cycle of about 20 – 25 years, roughly corresponding to the average age of reproduction (I know, first parenthood sounds kind’a more civilized). According to this logic, I am one generation older than my son.

That pattern of interactions is almost the exact opposite of rural villages and small towns, where people interact much more between generations and less inside generations. Social roles form as ‘Whatever age you are between 20 and 80, you do this’. As we compare those two mechanisms of role-formation, it turns out that cities are inherently prone to creating completely new sets of social roles for each new generation of people coming with the demographic tide. Cities facilitate innovation at the behavioural level. By innovation, I mean the invention of something new combined with a mechanism of diffusing that novelty across the social system.

These are some of my thoughts about cities. How can I play them out into my teaching? I start with a staple course of mine: microeconomics. Microeconomics sort of nicely fits with the topic of cities, and I don’t even have to prove it, ‘cause Adam Smith did. In his ‘Inquiry Into The Nature And Causes of The Wealth of Nations’, Book I, Chapter III, entitled ‘That The Division Of Labour Is Limited By The Extent Of The Market’, he goes: ‘[…] There are some sorts of industry, even of the lowest kind, which can be carried on nowhere but in a great town. A porter, for example, can find employment and subsistence in no other place. A village is by much too narrow a sphere for him; even an ordinary market-town is scarce large enough to afford him constant occupation. In the lone houses and very small villages which are scattered about in so desert a country as the highlands of Scotland, every farmer must be butcher, baker, and brewer, for his own family. In such situations we can scarce expect to find even a smith, a carpenter, or a mason, within less than twenty miles of another of the same trade. The scattered families that live at eight or ten miles distance from the nearest of them, must learn to perform themselves a great number of little pieces of work, for which, in more populous countries, they would call in the assistance of those workmen. Country workmen are almost everywhere obliged to apply themselves to all the different branches of industry that have so much affinity to one another as to be employed about the same sort of materials. A country carpenter deals in every sort of work that is made of wood; a country smith in every sort of work that is made of iron. The former is not only a carpenter, but a joiner, a cabinet-maker, and even a carver in wood, as well as a wheel-wright, a plough-wright, a cart and waggon-maker. The employments of the latter are still more various.
It is impossible there should be such a trade as even that of a nailer in the remote and inland parts of the highlands of Scotland. Such a workman at the rate of a thousand nails a-day, and three hundred working days in the year, will make three hundred thousand nails in the year. But in such a situation it would be impossible to dispose of one thousand, that is, of one day’s work in the year […]’.     

Microeconomics can be seen as a science of how some specific social structures, strongly pegged in the social distinction between cities and the countryside, reproduce themselves in time, as well as produce other social structures. I know, this definition does not really seem to fall close to the classical, Marshallian graph of two curves, i.e. supply and demand, crossing nicely in the point of equilibrium. ‘Does not seem to…’ is distinct from ‘does not’. Let’s think a moment. The local {Supply <> Demand} equilibrium is a state of deals being closed at recurrent, predictable a price. One of the ways to grasp the equilibrium price consists in treating it as the price which clears all the surplus stock of goods in the market. It is the price which people agree upon, at the end of the day. Logically, there is an underlying social structure which allows such a recurrent, equilibrium-making bargaining process. This structure reproduces itself in n copies, over and over again, and each such copy is balanced on different a coupling between equilibrium price and equilibrium product.
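That clearing mechanism can be sketched with two hypothetical linear curves; the coefficients below are textbook-style inventions, not estimates of any real market.

```python
# Toy linear market: demand Q_d = 100 - 2P, supply Q_s = -20 + 4P.
# The equilibrium price is the one which clears the surplus stock,
# i.e. the P* at which Q_d(P*) == Q_s(P*).
def demand(p):
    return 100.0 - 2.0 * p

def supply(p):
    return -20.0 + 4.0 * p

# Solve 100 - 2P = -20 + 4P  ->  6P = 120  ->  P* = 20
p_star = 120.0 / 6.0
q_star = demand(p_star)
surplus = supply(p_star) - demand(p_star)
print(p_star, q_star, surplus)
```

At any price above P* a positive surplus appears and pushes the price down, and vice versa below it, which is how the recurrent bargaining process keeps reproducing the local equilibrium.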

Here comes something I frequently repeat to those of my students who have enough grit to read any textbook in economics: those nice curves in the Marshallian graph, namely demand and supply, don’t really exist. They represent theoretical states at best, and usually these are more in the purely hypothetical department. We just guess that social reality is being sort of bent along them. The thing that really exists, here and now, is the equilibrium price that we strike our deals at, and the corresponding volumes of business we do at this price. What really exists in slightly longer a perspective is the social structure able to produce local equilibriums between supply and demand, which, in turn, requires people in that structure recurrently producing economically valuable, tradable surpluses of physical goods and/or marketable skills.
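For the sake of illustration, that local {Supply <> Demand} equilibrium can be sketched in a few lines of code, with made-up linear curves. The coefficients below are purely illustrative, not empirical:

```python
# A back-of-the-envelope sketch of a local {Supply <> Demand} equilibrium,
# with assumed linear curves. All coefficients are illustrative only.

def equilibrium(a, b, c, d):
    """Demand: q = a - b*p ; Supply: q = c + d*p.
    Returns the (price, quantity) at which the market clears."""
    p = (a - c) / (b + d)  # solve a - b*p = c + d*p for p
    q = a - b * p
    return p, q

# Example: demand q = 100 - 2p meets supply q = 10 + 4p
p_star, q_star = equilibrium(100, 2, 10, 4)
print(p_star, q_star)  # 15.0 70.0
```

The point of the sketch is the same as in the paragraph above: the curves are hypothetical constructs, and the only thing the market actually produces is the pair (p_star, q_star).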

Question: how can I know there is any point in producing an economically valuable surplus of anything? Answer: when other people make me understand they would gladly acquire said surplus. Mind you, although markets are mostly based on money, there are de facto markets without straightforward monetary payment. The example which comes to my mind is a structure which I regularly observe, every now and then, in people connected to business and politics, especially in Warsaw, the capital of my home country, Poland. Those guys (and gals) sometimes call it ‘the cooperative of information and favour’. You slightly facilitate a deal I want to strike, and I remember that, and later I facilitate the deal you want to strike. We don’t do business together, strictly speaking, we just happen to have mutual leverage on each other’s business with third parties. I observed that pattern frequently, and the thing really works as a market of favours based on social connections and individual knowledge. No one exchanges money (that could be completely accidentally perceived as corruption, and that perfectly accidental false perception could occur in a prosecutor, and no one wants to go to jail), and yet this is a market. There is an equilibrium price for facilitating a $10 million deal in construction. That equilibrium price might be the facilitation of another $10 million deal in construction, or the facilitation of someone being elected to the city council. By the way, that market of favours really stirs it up when some kind of election is upcoming.

Anyway, the more social interactions I enter into over a unit of time, the more chances I have to spot some kind of economically valuable surplus in what I do and make. The more such social interactions are possible in the social structure of my current residence, the better. Yes, cities allow that. The next step is from those general thoughts to a thread of teaching and learning. I can see a promising avenue in the following scheme:

>>> Step 1: I choose or ask my students to choose any type of normal, recurrent social interaction. It can be interesting to film a bit of city life, just like that, casually, with a phone, and then use it as empirical material.

>>> Step 2: Students decompose that interaction into layers of different consistency, i.e. separate actions and events which change quickly and frequently from those which last and recur.

>>> Step 3: Students connect the truly recurrent actions and events to an existing market of goods or marketable skills. They describe, with as much detail as possible, how recurrent interactions translate into local states of equilibrium.

Good. One carryover done, namely into microeconomics, I try another one, into another of my basic courses at the university: fundamentals of management. There is something I try to tell my students whenever I start this course, in October: ‘Guys, I can barely outline what management is. You need to go out there, into that jungle, and then you learn. I can tell you what the jungle looks like, sort of in general’. Social interactions and social roles in management spell power, hierarchy, influence, competition and cooperation on top of all that. Invariably, students ask me: ‘But, sir, wouldn’t it be simpler just to cooperate, without all those games of power and hierarchy inside the organization?’. My answer is that yes, indeed, it would be simpler to the point of being too simple, i.e. simplistic. Let’s think. When we rival inside the organization, we need to interact. There is no competition without interaction. The more we compete, the more we interact, and the more personal resources we need to put in that interaction.

Mind you, competition is not the only way to trigger intense, abundant human interaction. Camaraderie, love, emotional affiliation to a common goal – they all can do the same job, and they tend to be more pleasant than interpersonal competition. There is a caveat, though: all those forms of action-generating emotional bonds between human beings tend to be fringe phenomena. They happen rarely. With how many people, in our existence, can we hope to develop a bond of the type ‘I have your back and you have my back, no matter what’? Just a few, at best. Quite a number of persons walk through their entire life without ever experiencing this type of connection. On the other hand, competition is a mainstream phenomenon. You put 5 random people in any durable social relation – business, teamwork, art etc. – and they are bound to develop competitive behaviour. Competition happens naturally, very frequently, and can trigger tacit coordination when handled properly.

Yes, right, you can legitimately ask what it means to handle competition properly. As a kid, or in your teenage years, have you ever played a competitive game, such as tennis, basketball, volleyball, chess, computer games, or even plain infantile fighting? Do you know that situation when other people want to play with you because you sometimes score and win, but kind of not all the time and not at any price? That special state when you get picked for the next game, and you like the feeling? Well, that’s competition handled properly. You mobilise yourself in rivalry with other people, but you keep in mind that the most fundamental rule of any competitive game is to keep the door open for future games.

Thus, I guess that teaching management in academia, which I very largely do, may consist in showing my students how to compete constructively inside an organisation, i.e. how to be competitive and cooperative at the same time. I can show internal competition and cooperation in the context of a specific business case. I already tend to work a lot, in class, with cases such as Tesla, Netflix, Boeing or Walt Disney. I can use their business description, such as can be found in an annual report, to reconstruct an organisational environment where competition and cooperation can take place. The key learning for management students is to understand what traits of that environment enable constructive competition, likely to engender cooperation, as opposed to situations marked either with destructive competition or with a destructive absence of competition altogether.

Discover Social Sciences is a scientific blog, which I, Krzysztof Wasniewski, individually write and manage. If you enjoy the content I create, you can choose to support my work, with a symbolic $1, or whatever other amount you please, via MY PAYPAL ACCOUNT.  What you will contribute to will be almost exactly what you can read now. I have been blogging since 2017, and I think I have a pretty clearly rounded style.

At the bottom of the sidebar on the main page, you can access the archives of this blog, all the way back to August 2017. You can get an idea of how I work, what I work on, and how my writing has evolved. If you like social sciences served in this specific sauce, I will be grateful for your support of my research and writing.

‘Discover Social Sciences’ is a continuous endeavour and is mostly made of my personal energy and work. There are minor expenses, to cover the current costs of maintaining the website, or to collect data, yet I want to be honest: by supporting ‘Discover Social Sciences’, you will be mostly supporting my continuous stream of writing and online publishing. As you read through the stream of my updates on https://discoversocialsciences.com , you can see that I usually write 1 – 3 updates a week, and this is the pace of writing that you can expect from me.

Besides the continuous stream of writing which I provide to my readers, there are some more durable takeaways. One of them is an e-book which I published in 2017, ‘Capitalism And Political Power’. Normally, it is available with the publisher, the Scholar publishing house (https://scholar.com.pl/en/economics/1703-capitalism-and-political-power.html?search_query=Wasniewski&results=2 ). Via https://discoversocialsciences.com , you can download that e-book for free.

Another takeaway you can be interested in is ‘The Business Planning Calculator’, an Excel-based, simple tool for financial calculations needed when building a business plan.

Both the e-book and the calculator are available via links in the top right corner of the main page on https://discoversocialsciences.com .


Second-hand observations

MY EDITORIAL ON YOU TUBE

I keep reviewing, upon the request of the International Journal of Energy Sector Management (ISSN1750-6220), a manuscript entitled ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’. I have already formulated my first observations on that paper in the last update: I followed my suspects home, where I mostly focused on studying the theoretical premises of the model used in the paper under review, or rather of a model used in another article, which the paper under review heavily refers to.

As I go through that initial attempt to review this manuscript, I see I was bitching a lot, and this is not nice. I deeply believe in the power of eristic dialogue, and I think that being artful in verbal dispute is different from being destructive. I want to step into the shoes of those authors, technically anonymous to me (although I can guess who they are by their bibliographical references), who wrote ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’. When I write a scientific paper, my conclusion is essentially what I wanted to say from the very beginning, I just didn’t know how to phrase that s**t out. All the rest, i.e. introduction, mathematical modelling, empirical research – it all serves as a set of strings (duct tape?), which help me attach my thinking to other people’s thinking.

I assume that people who wrote ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’ are like me. Risky but sensible, as assumptions come. I start from the conclusion of their paper, and I am going to follow upstream. When I follow upstream, I mean business. It is not just about going upstream the chain of paragraphs: it is going upstream the flow of language itself. I am going to present you a technique I use frequently when I really want to extract meaning and inspiration from a piece of writing. I split that writing into short paragraphs, like no more than 10 lines each. I rewrite each such paragraph in inverted syntax, i.e. I rewrite from the last word back to the first one. It gives something like Master Yoda speaking: bullshit avoid shall you. I discovered by myself, and this is backed by the science of generative grammar, that good writing, when read backwards, uncovers something like a second layer of meaning, and that second layer is just as important as the superficial one.
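That Master-Yoda technique is simple enough to automate. Here is a minimal sketch, which splits a text into paragraphs and rewrites each one from the last word back to the first:

```python
# A minimal sketch of the inverted-syntax reading technique: split a text
# into paragraphs, then rewrite each paragraph word by word, from the last
# word back to the first, Master-Yoda style.

def invert_syntax(text):
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return [" ".join(reversed(p.split())) for p in paragraphs]

print(invert_syntax("you shall avoid bullshit")[0])  # bullshit avoid shall you
```

Of course, the code only does the mechanical part; spotting the second layer of meaning in the inverted text remains a human job.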

I remember having applied this method to a piece of writing by Deepak Chopra. It was almost wonderful how completely meaningless that text was when read backwards. There was no second layer. Amazing.

Anyway, now I am going to demonstrate the application of that method to the conclusion of ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’. I present paragraphs of the original text in Times New Roman Italic. I rewrite the same paragraphs in inverted syntax with Times New Roman Bold. Under each such pair ‘original paragraph + inverted syntax’ I write my second-hand observations inspired by those two layers of meaning, and those observations of mine come in plain Times New Roman.

Let’s dance.

Original text: Although increasing the investment in energy reduction can effectively improve the environmental quality; in a relatively short period of time, the improvement of environmental quality is also very obvious; but in the long run, the the influence of the system (1). In this study, the energy, economic and environmental (3E) four-dimensional system model of energy conservation constraints was first established. The Bayesian estimation method is used to correct the environmental quality variables to obtain the environmental quality data needed for the research.

Inverted syntax: Research the for needed data quality environmental the obtain to variables quality environmental the correct to used is method estimation Bayesian the established first was constraints conservation energy of model system dimensional four 3E environmental and economic energy the study this in system the of influence run long the in but obvious very also is quality environmental of improvement the time of period short relatively a in quality environmental the improve effectively can reduction energy in investment the increasing although.

Second-hand observations: The essential logic of using Bayesian methodology is to reduce uncertainty in an otherwise weakly studied field, and to set essential points for orientation. A Bayesian function essentially divides reality into parts, which correspond to, respectively, success and failure.

It is interesting that traits of reality which we see as important – energy, economy and environmental quality – can be interpreted as dimensions of said reality. It corresponds to the Interface Theory of Perception (ITP): it pays off to build a representation of reality based on what we want rather than on what we perceive as absolute truth.    
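That success-versus-failure reading of Bayesian estimation can be illustrated with the simplest textbook case, a beta-binomial update. The observation counts below are made up for the example:

```python
# The success-versus-failure logic of Bayesian estimation, sketched with the
# simplest conjugate case: a beta-binomial update. The counts are illustrative.

from fractions import Fraction

def beta_binomial_update(alpha, beta, successes, failures):
    """Posterior Beta(alpha', beta') after observing successes and failures."""
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    return Fraction(alpha, alpha + beta)

# Flat prior Beta(1, 1), then 7 successes and 3 failures observed.
a, b = beta_binomial_update(1, 1, 7, 3)
print(a, b, posterior_mean(a, b))  # posterior Beta(8, 4), mean 2/3
```

Each new observation shifts the estimated probability of success, which is precisely the uncertainty-reducing, orientation-setting role I attribute to Bayesian methodology above.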

Original text: In addition, based on the Chinese statistical yearbook data, the Levenberg-Marquardt BP neural network method optimized by genetic algorithm is used to energy, economy and environment under energy conservation constraints. The parameters in the four-dimensional system model are effectively identified. Finally, the system science analysis theory and environment is still deteriorating with the decline in the discount rate of energy reduction inputs.

Inverted syntax: Inputs reduction energy of rate discount the in decline the with deteriorating still is environment and theory analysis science system the finally identified effectively are model system dimensional four the in parameters the constraints conservation energy under environment and economy energy to used is algorithm genetic by optimized method network neural Levenberg-Marquardt Backpropagation the data yearbook statistical Chinese the on based addition in.

Second-hand observations: The strictly empirical part of the article is relatively the least meaningful. The Levenberg-Marquardt BP neural network is made for quick optimization. It is essentially the method of Ordinary Least Squares transformed into a heuristic algorithm, and it can optimize very nearly anything. When using the Levenberg-Marquardt BP neural network we risk overfitting (i.e. hasty conclusions based on a sequence of successes) rather than the inability to optimize. It is almost obvious that – when trained and optimized with a genetic algorithm – the network can set such values in the model which allow stability. It simply means that the model has a set of values that virtually eliminate the need for adjustment between particular arguments, i.e. that the model is mathematically sound. On the other hand, it would be intellectually risky to attach too much importance to the specific values generated by the network. Remark: under the concept of ‘argument’ in the model I mean mathematical expressions of the type: [coefficient]*[parameter]*[local value in variable].
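My claim that the Levenberg-Marquardt method is essentially Ordinary Least Squares turned into an iterative heuristic can be shown with a hand-rolled sketch. This is not the authors’ network, just the damped-least-squares iteration applied to a deliberately trivial linear model, where it converges to the OLS solution:

```python
import numpy as np

# Hand-rolled Levenberg-Marquardt iteration for a linear model y = m*x + c.
# Because the model is linear, the damped iteration converges to the
# Ordinary Least Squares solution, which is the point made above.

def lm_fit(x, y, theta, lam=1e-3, steps=50):
    J = np.column_stack([x, np.ones_like(x)])  # Jacobian of the model w.r.t. (m, c)
    for _ in range(steps):
        r = y - J @ theta                      # residuals at the current estimate
        A = J.T @ J + lam * np.eye(2)          # damped normal equations
        theta = theta + np.linalg.solve(A, J.T @ r)
    return theta

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                              # noiseless line: slope 2, intercept 1
m, c = lm_fit(x, y, theta=np.zeros(2))
print(round(m, 4), round(c, 4))  # 2.0 1.0
```

The damping term lam is what turns the plain normal equations into a heuristic: large damping makes cautious, gradient-like steps, small damping makes bold, Gauss-Newton-like steps.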

The article conveys an important thesis, namely that the rate of return on investment in environmental improvement is important for sustaining long-term commitment to such improvement.  

Original text: It can be better explained that effective control of the peak arrival time of pollution emissions can be used as an important decision for pollution emission control and energy intensity reduction; Therefore, how to effectively and reasonably control the peak of pollution emissions is of great significance for controlling the stability of Energy, Economy and Environment system under the constraint of energy reduction, regulating energy intensity, improving environmental quality and sustainable development.

Inverted syntax: Development sustainable and quality environmental improving intensity energy regulating reduction energy of constraint the under system environment and economy energy of stability the controlling for significance great of is emissions pollution of peak the control reasonably and effectively to how therefore reduction intensity energy and control emission pollution for decision important an as used be can emissions pollution of time arrival peak the of control effective that explained better be can.

Second-hand observations: This is an interesting logic: we can control the stability of a system by controlling the occurrence of peak states. Incidentally, it is the same logic as that used during the COVID-19 pandemic. If we can control the way that s**t unfolds up to its climax, and if we can make that climax somewhat less steep, we have an overall better control over the whole system.

Original text: As the environmental capacity decreases, over time, the evolution of environmental quality shows an upward trend of fluctuations and fluctuations around a certain central value; when the capacity of the ecosystem falls to the limit, the system collapses. In order to improve the quality of the ecological environment and promote the rapid development of the economy, we need more measures to use more means and technologies to promote stable economic growth and management of the ecological environment.

Inverted syntax: Environment ecological the of management and growth economic stable promote to technologies and means more use to measures more need we economy the of development rapid the promote and environment ecological the of quality the improve to order in collapse system the limit the to falls ecosystem the of capacity the when value central a around fluctuations and fluctuations of trend upward an shows quality environmental of evolution the time over decreases capacity environmental the as.    

Second-hand observations: We can see more of the same logic: controlling a system means avoiding extreme states and staying in a zone of proximal development. As the system reaches the frontier of its capacity, fluctuations amplify and start drawing an upward drift. We don’t want such a drift. The system is the most steerable when it stays in a somewhat mean-reverted state.  

I am summing up that little exercise of style. The authors of ‘Evolutionary Analysis of a Four-dimensional Energy-Economy-Environment Dynamic System’ claim that relations between economy, energy and environment are a complex, self-regulating system, yet the capacity of that system to self-regulate is relatively the most pronounced in some sort of central expected states thereof, and fades as the system moves towards peak states. According to this logic, our relations with ecosystems are always somewhere between homeostasis and critical disaster, and those relations are the most manageable when closer to homeostasis. A predictable, and economically attractive rate of return in investments that contribute to energy savings seems to be an important component of that homeostasis.

The claim in itself is interesting and very largely common-sense, although it goes against some views, that e.g. in order to take care of the ecosystem we should forego economics. Rephrased in business terms, the main thesis of ‘Evolutionary Analysis of a Four-dimensional Energy-Economy-Environment Dynamic System’ is that we can manage that dynamic system as long as it involves project management much more than crisis-management. When the latter prevails, things get out of hand. The real intellectual rabbit hole starts when one considers the method of proving the veracity of that general thesis.  The authors build a model of non-linear connections between volume of pollution, real economic output, environmental quality, and constraint on energy reduction. Non-linear connections mean that output variables of the model –  on the left side of each equation – are rates of change over time in each of the four variables. Output variables in the model are strongly linked, via attractor-like mathematical arguments on the input side, i.e. arguments which combine coefficients strictly speaking with standardized distance from pre-defined peak values in pollution, real economic output, environmental quality, and constraint on energy reduction. In simpler words, the theoretical model built by the authors of ‘Evolutionary Analysis of a Four-dimensional Energy-Economy-Environment Dynamic System’ resembles a spiderweb. It has distant points of attachment, i.e. the peak values, and oscillates between them.

It is interesting how this model demonstrates the cognitive limitations of mathematics. If we are interested in controlling relations between energy, economy, and environment, our first, intuitive approach is to consider these desired outcomes as dimensions of our reality. Yet, those dimensions could be different. If I want to become a top-level basketball player, it does not necessarily mean that social reality is truly pegged on a vector of becoming-a-top-level-basketball-player. Social mechanics might be structured around completely different variables. Still, with strong commitment, this particular strategy might work. Truth is not the same as payoffs from our actions. A model of relations energy-economy-environment pegged on desired outcomes in these fields might be essentially false in ontological terms, yet workable as a tool for policy-making. This approach is validated by the Interface Theory of Perception (see, e.g. Hoffman et al. 2015[1] and Fields et al. 2018[2]).

From the formal-mathematical point of view, the model is constructed as a square matrix of complex arguments, i.e. the number of arguments on the right, input side of each equation is the same as the number of equations, whence the possibility to make a Jacobian matrix thereof, and to calculate its eigenvalues. The authors preset the coefficients of the model, and the peak values, so as to test for stability. Testing the model with those preset values demonstrates an essential lack of stability in the such-represented system. Stability is further tested by studying the evolution trajectory of the system. The method of evolution trajectory, in this context, seems to refer to life sciences and the concept of phenotypic trajectory (see e.g. Michael & Dean 2013[3]), and shows that the system, such as modelled, is unstable. Its evolution trajectory can change in an irregular and chaotic way.

In a next step, the authors test their model with empirical data regarding China between 2000 and 2017. They use a Levenberg–Marquardt Backpropagation Network in order to find the parameters of the system. With thus-computed parameters, and the initial state of the system set on data from 1980, the evolution trajectory of the system proves stable, in a multi-cycle mode.

Now, as I have passed in review the logic of ‘Evolutionary Analysis of a Four-dimensional Energy-Economy-Environment Dynamic System’, I start bitching again, i.e. I point at what I perceive as, respectively, strengths and weaknesses of the manuscript. After reading and rereading the paper, I come to the conclusion that the most valuable part thereof is precisely the use of evolution trajectory as theoretical vessel. The value added I can see here consists in representing something complex that we want – we want our ecosystem not to kill us (immediately) and we want our smartphones and our power plants working as well – in a mathematical form, which can be further processed along the lines of evolution trajectory.

That inventive, intellectually far-reaching approach is accompanied, however, by several weaknesses. Firstly, it is an elaborate path leading to common-sense conclusions, namely that managing our relations with the ecosystem is functional as long as it takes the form of economically sound project management, rather than crisis management. The manuscript seems to be more of a spectacular demonstration of method rather than discovery in substance.

Secondly, the model, such as is presented in the manuscript, is practically impossible to decipher without referring to the article Zhao, L., & Otoo, C. O. A. (2019). Stability and Complexity of a Novel Three-Dimensional Environmental Quality Dynamic Evolution System. Complexity, 2019, https://doi.org/10.1155/2019/3941920 . When I say ‘impossible’, it means that the four equations of the model under review are completely cryptic, as the authors do not explain their mathematical notation at all, and one has to look into this Zhao, L., & Otoo, C. O. A. (2019) paper in order to make any sense of it.

After cross-referencing those two papers and the two models, I obtain quite a blurry picture. In this Zhao, L., & Otoo, C. O. A. (2019) we have a complex, self-regulating system made of 3 variables: volume of pollution x(t), real economic output y(t), and environmental quality z(t). The system goes through an economic cycle of k periods, and inside the cycle those three variables reach their respective local maxima and combine into complex apex states. These states are: F = max_k[x(t)], E = max_k[y(t)], H = max_k(∆z/∆y) – or the peak value of the impact of economic growth y(t) on environmental quality z(t) – and P stands for the absolute maximum of pollution possible to absorb by the ecosystem, thus something like P = max(F). With those assumptions in mind, the original model by Zhao, L., & Otoo, C. O. A. (2019), which, for the sake of presentational convenience, I will further designate as Model #1, goes like:

d(x)/d(t) = a1*x*[1 – (x/F)] + a2*y*[1 – (y/E)] – a3*z

d(y)/d(t) = -b1*x – b2*y – b3*z

d(z)/d(t) = -c1*x + c2*y*[1 – (y/H)] + c3*z*[(x/P) – 1]
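Just to make Model #1 tangible, here is a sketch of its numerical integration with a plain Euler step. All coefficients and peak values below are placeholders of my own invention, not the values used by Zhao, L., & Otoo, C. O. A. (2019):

```python
# A numerical sketch of Model #1, integrated with a plain Euler step.
# Coefficients (a1..c3) and peak values (F, E, H, P) are placeholders,
# not the parameter values from the original paper.

def model1_step(x, y, z, p, dt=0.01):
    a1, a2, a3, b1, b2, b3, c1, c2, c3 = p["a"] + p["b"] + p["c"]
    F, E, H, P = p["F"], p["E"], p["H"], p["P"]
    dx = a1*x*(1 - x/F) + a2*y*(1 - y/E) - a3*z
    dy = -b1*x - b2*y - b3*z
    dz = -c1*x + c2*y*(1 - y/H) + c3*z*(x/P - 1)
    return x + dx*dt, y + dy*dt, z + dz*dt

params = {"a": [0.5, 0.2, 0.1], "b": [0.1, 0.3, 0.1], "c": [0.1, 0.2, 0.1],
          "F": 6.0, "E": 3.0, "H": 9.0, "P": 7.0}
state = (1.0, 1.0, 1.0)          # initial (x, y, z)
for _ in range(1000):            # integrate up to t = 10
    state = model1_step(*state, params)
print(state)
```

With different placeholder values the trajectory can drift, oscillate or settle; that sensitivity to parameters is exactly what the stability analysis in the paper is about.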

The authors of ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’ present a different model, which they introduce as an extension of that by Zhao, L., & Otoo, C. O. A. (2019). They introduce a 4th variable, namely energy reduction constraints, designated as w(t). There is not a single word about what it exactly means. The first moments over time of, respectively, x(t), y(t), z(t), and w(t) play out as in Model #2:

d(x)/d(t) = a1*x*[(y/M) – 1] – a2*y + a3*z + a4*w

d(y)/d(t) = -b1*x + b2*y*[1 – (y/F)] + b3*z*[1 – (z/E)] – b4*w

d(z)/d(t) = c1*x*[(x/N) – 1] – c2*y – c3*z – c4*w

d(w)/d(t) = d1*x – d2*y + d3*z*[1 – (z/H)] + d4*w*[(y/P) – 1]
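The stability test via eigenvalues of the Jacobian matrix, mentioned earlier, can be sketched numerically for Model #2. Once again, every coefficient and peak value below is a placeholder of mine, not a value from the manuscript:

```python
import numpy as np

# A sketch of the stability test discussed above: build a numerical Jacobian
# of Model #2 at a given state and inspect its eigenvalues (the system is
# locally stable when all real parts are negative). Every coefficient and
# peak value below is a placeholder, not taken from the paper.

def model2(v, p):
    x, y, z, w = v
    a, b, c, d = p["a"], p["b"], p["c"], p["d"]
    M, F, E, N, H, P = (p[k] for k in "MFENHP")
    return np.array([
        a[0]*x*(y/M - 1) - a[1]*y + a[2]*z + a[3]*w,
        -b[0]*x + b[1]*y*(1 - y/F) + b[2]*z*(1 - z/E) - b[3]*w,
        c[0]*x*(x/N - 1) - c[1]*y - c[2]*z - c[3]*w,
        d[0]*x - d[1]*y + d[2]*z*(1 - z/H) + d[3]*w*(y/P - 1),
    ])

def numerical_jacobian(f, v, p, eps=1e-6):
    J = np.zeros((4, 4))
    for j in range(4):
        dv = np.zeros(4)
        dv[j] = eps
        J[:, j] = (f(v + dv, p) - f(v - dv, p)) / (2 * eps)  # central difference
    return J

p = {"a": [0.5, 0.2, 0.1, 0.1], "b": [0.1, 0.3, 0.1, 0.1],
     "c": [0.1, 0.2, 0.1, 0.1], "d": [0.1, 0.1, 0.1, 0.1],
     "M": 3.0, "F": 3.0, "E": 9.0, "N": 6.0, "H": 9.0, "P": 3.0}
J = numerical_jacobian(model2, np.ones(4), p)
eig = np.linalg.eigvals(J)
print(np.all(eig.real < 0))  # True only if this particular state is locally stable
```

This is the generic recipe; the authors presumably evaluate the Jacobian analytically at equilibrium points, but the eigenvalue criterion is the same.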

Now, I have a problem. When the authors of ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’ present their Model #2 as a simple extension of Model #1, this is simply not true. First of all, Model #2 contains two additional parameters, namely M and N, which are not explained at all in the paper. They are supposed to be normal numbers and that’s it. If I follow bona fide the logic of Model #1, M and N should be some kind of maxima, simple or compound. It is getting interesting. I have a model with two unknown maxima and 4 known ones, and I am supposed to understand the theory behind it. Cool. I like puzzles.

The parameter N is used in the expression [(x/N) – 1]. We divide the local volume of pollution x by N, thus we do x/N, and this is supposed to mean something. If we keep any logic at all, N should be a maximum of pollution, yet we already have two maxima of pollution, namely F and P. As for M, it is used in the expression [(y/M) – 1] and therefore I assume M is a maximum state of real economic output ‘y’. Once again, we already have one maximum in that category, namely ‘E’. Apparently, the authors of Model #2 assume that the volume of pollution x(t) can have three different, meaningful maxima, whilst real economic output y(t) has two of them. I will go back to those maxima further below, when I discuss the so-called ‘attractor expressions’ contained in Model #2.

Second of all, Model #2 presents a very different logic than Model #1. Arbitrary signs of coefficients ai, bi, ci and di are different, i.e. minuses replace pluses and vice versa. Attractor expressions of the type [(a/b) – 1] or [1 – (a/b)] are different, too. I am going to stop by these ones a bit longer, as it is important regarding the methodology of science in general. When I dress a hypothesis like y = a*x1 + b*x2, coefficients ‘a’ and ‘b’ are neutral in the sense that if x1 > 0, then a*x1 > 0 as well etc. In other words, positive coefficients ‘a’ and ‘b’ do not imply anything about the relation between y, x1, and x2.

On the other hand, when I say y = -a*x1 + b*x2, it is different. Instead of having a coefficient ‘a’, I have a coefficient ‘-a’, thus opposite to ‘y’. If x1 > 0, then a*x1 < 0 and vice versa. By assigning a negative coefficient to phenomenon x, I assume it works as a contrary force to phenomenon y. A negative coefficient is already a strong assumption. As I go through all the arbitrarily negative coefficients in Model #2, I can see the following strong claims:

>>> Assumption 1: the rate of change in the volume of pollution d(x)/d(t) is inversely proportional to the real economic output y.

>>> Assumption 2: the rate of change in real economic output d(y)/d(t) is inversely proportional to the volume of pollution x

>>> Assumption 3: the rate of change in real economic output d(y)/d(t) is inversely proportional to energy reduction constraints w.

>>> Assumption 4: the rate of change in environmental quality d(z)/d(t) is inversely proportional to environmental quality z.

>>> Assumption 5: the rate of change in environmental quality d(z)/d(t) is inversely proportional to real economic output y.

>>> Assumption 6: the rate of change in environmental quality d(z)/d(t) is inversely proportional to the volume of pollution x.

>>> Assumption 7: the rate of change in energy reduction constraints d(w)/d(t) is inversely proportional to real economic output y.

These assumptions would greatly benefit from theoretical discussion, as some of them, e.g. Assumption 1 and 2, seem counterintuitive.

Empirical data presented in ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’ is probably the soft underbelly of the whole logical structure unfolding in the manuscript. The authors present that data in standardized form, as constant-base indexes, where values from 1999 are equal to 1. In Table 1 below, I present those standardized values:

Table 1

Year | X – volume of pollution | Y – real economic output | Z – environmental quality | W – energy reduction constraints
2000 | 1,9626 | 1,0455 | 1,1085 | 0,9837
2001 | 3,6786 | 1,1066 | 1,2228 | 0,9595
2002 | 2,2791 | 1,2064 | 1,3482 | 0,9347
2003 | 1,2699 | 1,402 | 1,5283 | 0,8747
2004 | 2,3033 | 1,6382 | 1,8062 | 0,7741
2005 | 1,9352 | 1,8594 | 2,0813 | 0,7242
2006 | 2,0778 | 2,038 | 2,4509 | 0,6403
2007 | 3,9437 | 2,2156 | 3,0307 | 0,5455
2008 | 6,5583 | 2,2808 | 3,5976 | 0,4752
2009 | 3,283 | 2,3912 | 3,8997 | 0,4061
2010 | 3,3307 | 2,5656 | 4,602 | 0,3693
2011 | 3,6871 | 2,7534 | 5,4243 | 0,3565
2012 | 4,013 | 2,8608 | 6,0326 | 0,321
2013 | 3,9274 | 2,9659 | 6,6068 | 0,2817
2014 | 4,2167 | 3,0292 | 7,2151 | 0,2575
2015 | 4,2893 | 3,0583 | 7,6813 | 0,2322
2016 | 4,5925 | 3,1004 | 8,2872 | 0,2159
2017 | 4,6121 | 3,1942 | 9,2297 | 0,2147

I found several problems with that data, and they sum up to one conclusion: it is practically impossible to check its veracity. The time series of real economic output seems to correspond to some kind of constant-price measurement of the aggregate GDP of China, yet it does not fit the corresponding time series published by the World Bank (https://data.worldbank.org/indicator/NY.GDP.MKTP.KD ). Metrics such as ‘environmental quality’ (z) or ‘energy reduction constraints’ (w) are completely cryptic. Probably, they are some sort of compound indices, and their structure in itself requires explanation.

There seems to be a logical loop between the theoretical model presented in the beginning of the manuscript and the way that data is presented. The model presents an important weakness as regards functional relations inside arguments based on peak values, such as ‘y/M’ or ‘y/P’. The authors very freely put metric tons of pollution in fractional relation with units of real output, etc. This is theoretical freestyle, which might be justified, yet it requires thorough explanation and references to literature. Given the form in which that data is presented, a suspicion arises, namely that standardization, i.e. having driven all data to the same denomination, opened the door to those strange, cross-functional arguments. It should be remembered that even when standardized through a common denomination, distinct phenomena remain distinct. A mathematical trick is not the same as ontological identity.

Validation of the model with a Levenberg–Marquardt Backpropagation Network raises doubts, as well. This specific network, prone to overfitting, is essentially a tool for quick optimization in a system which we otherwise thoroughly understand. This is the good old method of Ordinary Least Squares translated into a sequence of heuristic steps. The LM-BN network does not discover anything about the system at hand, it just optimizes it as quickly as possible.
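To illustrate that point: the Levenberg–Marquardt logic is damped least squares, i.e. the OLS normal equations iterated with a damping term. Here is a minimal sketch in Python, on made-up straight-line data, showing how the iteration lands exactly on the OLS fit:

```python
import numpy as np

# Minimal Levenberg-Marquardt loop for a straight-line fit. The data is
# invented for illustration; the point is that LM is damped least squares,
# i.e. OLS logic turned into an iterative heuristic.

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])    # roughly y = 1 + 2x

def residuals(p):
    return y - (p[0] + p[1] * x)

p = np.zeros(2)                            # initial guess
lam = 1e-3                                 # damping factor
J = np.column_stack([np.ones_like(x), x])  # Jacobian (constant for a line)

for _ in range(50):
    r = residuals(p)
    # the LM update: (J'J + lam*I) step = J'r
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    p = p + step

print(p)   # converges to the OLS solution, roughly [1.04, 1.99]
```

The fixed point of that loop is precisely the OLS normal equations, which is why an LM-based network optimizes a system it is handed rather than discovering anything about it.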

In a larger perspective, using a neural network to validate a model implies an important assumption, namely that consecutive states of the system form a Markov chain, i.e. each consecutive state is exclusively the outcome of the preceding state. It should be remembered that neural networks in general are artificial intelligence, and intelligence, in short, means that we figure out what to do when we have no clue as to what to do, and we do it without any providential, external guidelines. The model presented by the authors clearly pegs the system on hypothetical peak values. These are exogenous to all but one possible state of the system, whence a logical contradiction between the model and the method of its validation.

Good. After some praising and some bitching, I can assess the manuscript by answering standard questions asked by the editor of the International Journal of Energy Sector Management (ISSN 1750-6220).

  1. Originality: Does the paper contain new and significant information adequate to justify publication?

The paper presents a methodological novelty, i.e. the use of evolution trajectory as a method to study complex social-environmental systems, and this novelty deserves to be put in the spotlight even more than it is in the present version of the paper. Still, the substantive conclusions of the paper do not seem original at all.

  2. Relationship to Literature: Does the paper demonstrate an adequate understanding of the relevant literature in the field and cite an appropriate range of literature sources? Is any significant work ignored?

The paper presents important weaknesses as regards bibliographical referencing. First of all, there are clear theoretical gaps as regards the macroeconomic aspect of the model presented, and as regards the nature and proper interpretation of the empirical data used for validating the model. More abundant references in these two fields would be welcome, if not necessary.

Second of all, the model presented by the authors is practically impossible to understand formally without reading another, referenced paper; this is a case of excessive referencing. The paper should present its theory in a complete, self-contained way.

  3. Methodology: Is the paper’s argument built on an appropriate base of theory, concepts, or other ideas? Has the research or equivalent intellectual work on which the paper is based been well designed? Are the methods employed appropriate?

The paper combines a very interesting methodological approach, i.e. the formulation of complex systems in a way that makes them treatable with the method of evolution trajectory, with clear methodological weaknesses. As for the latter, three main questions emerge. Firstly, it seems methodologically incorrect to construct cross-functional attractor arguments, where distinct phenomena are denominated one over the other. Secondly, the use of the LM-BN network as a tool for validating the model is highly dubious. This specific network is made for quick optimization of something we understand, and not for discovery inside something we barely understand.

Thirdly, the use of a neural network of any kind implies assuming that consecutive states of the system form a Markov chain, which is logically impossible with exogenous peak-values preset in the model.

  4. Results: Are results presented clearly and analysed appropriately? Do the conclusions adequately tie together the other elements of the paper?

The results are clear, yet their meaning seems not to be fully understood. Coefficients calculated via a neural network represent one plausibly possible state of the system. When the authors draw conclusions by combining the results so obtained with the state of the system from the year 1980, the reasoning seems really stretched in terms of empirical inference.

  5. Implications for research, practice and/or society: Does the paper identify clearly any implications for research, practice and/or society? Does the paper bridge the gap between theory and practice? How can the research be used in practice (economic and commercial impact), in teaching, to influence public policy, in research (contributing to the body of knowledge)? What is the impact upon society (influencing public attitudes, affecting quality of life)? Are these implications consistent with the findings and conclusions of the paper?

The paper presents more implications for research than for society. As stated before, the substantive conclusions of the paper boil down to common-sense claims, i.e. that it is better to keep the system stable rather than unstable. On the other hand, some aspects of the method used, i.e. the application of evolutionary trajectory, seem very promising for the future. The paper seems to create abundant, interesting openings for future research rather than practical applications for now.

  6. Quality of Communication: Does the paper clearly express its case, measured against the technical language of the field and the expected knowledge of the journal’s readership? Has attention been paid to the clarity of expression and readability, such as sentence structure, jargon use, acronyms, etc.?

The quality of communication is a serious weakness in this case. The above-mentioned excessive reliance on Zhao, L., & Otoo, C. O. A. (2019). Stability and Complexity of a Novel Three-Dimensional Environmental Quality Dynamic Evolution System. Complexity, 2019, https://doi.org/10.1155/2019/3941920 is one point. The flow of logic is another. When, for example, the authors suddenly claim (page 8, top): ‘In this study we set …’, there is no explanation why, and on what theoretical grounds, they do so.

Clarity and correct phrasing are clearly lacking throughout the macroeconomic aspect of the paper. It is truly hard to understand what the authors mean by ‘economic growth’.

Finally, some sentences are clearly ungrammatical, e.g. (page 6, bottom): ‘By the system (1) can be launched energy intensity […]’.

Good. Now, you can see what a scientific review looks like. I hope it was useful. Discover Social Sciences is a scientific blog, which I, Krzysztof Wasniewski, individually write and manage. If you enjoy the content I create, you can choose to support my work, with a symbolic $1, or whatever other amount you please, via MY PAYPAL ACCOUNT.  What you will contribute to will be almost exactly what you can read now. I have been blogging since 2017, and I think I have a pretty clearly rounded style.

At the bottom of the sidebar on the main page, you can access the archives of that blog, all the way back to August 2017. You can get an idea of how I work, what I work on, and how my writing has evolved. If you like social sciences served in this specific sauce, I will be grateful for your support to my research and writing.

‘Discover Social Sciences’ is a continuous endeavour and is mostly made of my personal energy and work. There are minor expenses, to cover the current costs of maintaining the website, or to collect data, yet I want to be honest: by supporting ‘Discover Social Sciences’, you will be mostly supporting my continuous stream of writing and online publishing. As you read through the stream of my updates on https://discoversocialsciences.com , you can see that I usually write 1 – 3 updates a week, and this is the pace of writing that you can expect from me.

Besides the continuous stream of writing which I provide to my readers, there are some more durable takeaways. One of them is an e-book which I published in 2017, ‘Capitalism And Political Power’. Normally, it is available with the publisher, the Scholar publishing house (https://scholar.com.pl/en/economics/1703-capitalism-and-political-power.html?search_query=Wasniewski&results=2 ). Via https://discoversocialsciences.com , you can download that e-book for free.

Another takeaway you can be interested in is ‘The Business Planning Calculator’, an Excel-based, simple tool for financial calculations needed when building a business plan.

Both the e-book and the calculator are available via links in the top right corner of the main page on https://discoversocialsciences.com .

You might be interested in Virtual Summer Camps, as well. These free, half-day summer camps will be week-long, with enrichment-based classes in subjects like foreign languages, chess, theater, coding, Minecraft, how to be a detective, photography and more. These live, interactive classes will be taught by expert instructors vetted through Varsity Tutors’ platform. There are already 200 camps scheduled for the summer: https://www.varsitytutors.com/virtual-summer-camps .


[1] Hoffman, D. D., Singh, M., & Prakash, C. (2015). The interface theory of perception. Psychonomic bulletin & review, 22(6), 1480-1506.

[2] Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186-213. https://doi.org/10.1016/j.cogsys.2017.10.003

[3] Collyer, M. L., & Adams, D. C. (2013). Phenotypic trajectory analysis: comparison of shape change patterns in evolution and ecology. Hystrix, the Italian Journal of Mammalogy, 24(1). https://doi.org/10.4404/hystrix-24.1-6298

I followed my suspects home

MY EDITORIAL ON YOU TUBE

I am putting together the different threads of thinking which I have developed over the last weeks. I am trying to make sense of the concept of collective intelligence, and to make my science keep up with the surrounding reality. If you have followed my latest updates, you know I yielded to the temptation of taking a stance regarding current events (see Dear Fanatics and We suck our knowledge about happening into some kind of patterned structure ). I have been struggling, and I keep struggling with balancing science with current observation. Someone could say: ‘But, prof, isn’t science based on observation?’. Yes, indeed it is, but science is like a big lorry: it has a wide turning radius, because it needs to make sure to incorporate current observation into a conceptual network which takes a step back from current observation. I am committed to science, and, at the same time, I am trying to take as tight a turn around the current events as possible. Tires screech on the tarmac, synapses flare… It is exciting and almost painful, all at the same time. I think we need it, too: that combination of involvement and distance.

Once again, I restate my reasons for being concerned about the current events. First of all, I am a human being, and when I see and feel the social structure rattling around me, I become a bit jumpy. Second of all, as the Black Lives Matter protests spill from the United States to Europe, I can see the worst European demons awakening. Fanatic leftist anarchists, firmly believing they will change the world for the better by destroying it, pave the way for right-wing fanatics, who, in turn, firmly believe that order is all we need and order tastes the best when served with a pinch of concentration camps. When I saw some videos published by activists from the so-called Chaz, or Capitol-Hill-Autonomous-Zone, in Seattle, Washington, U.S., I had a déjà vu. ‘F**k!’ I thought ‘This is exactly what I remember about life in a communist country’. A bunch of thugs with guns control the borders and decide who can get in and out. Behind them, a handful of grandstanding, useful idiots with big signs: ‘Make big business pay’, ‘We want restorative justice’ etc. In the background, thousands of residents – people who simply want to live their lives the best they can – are trapped in that s**t. This is what I remember from the communist Poland: thugs with deadly force using enthusiastic idiots to control local resources.

Let me be clear: I know people in Poland who use the expression ‘black apes’ to designate black people. I am sorry when I hear things like that. That one hurts too. Still, people who say stupid s**t can be talked into changing their mind. On the other hand, people who set fire to other people’s property and handle weapons are much harder to engage in a creative exchange of viewpoints. I think it is a good thing to pump our brakes before we come to the edge of the cliff. If we go over that edge, it will dramatically slow down positive social change instead of speeding it up.

Thirdly, events go the way I was slightly afraid they would when the pandemic started, and lockdowns were being instated. The sense of danger combined with the inevitable economic downturn makes a perfect tinderbox for random explosions. This is social change experienced from the uncomfortable side. When I develop my research about cities and their role in our civilisation, I frequently refer to the concept of collective intelligence. I refer to cities as a social contrivance, supposed to work as creators of new social roles, moderators of territorial conflicts, and markets for agricultural goods. If they are supposed to work, you might ask, why don’t they? Why are they overcrowded, polluted, infested with crime and whatnot?

You probably know that you can hurt yourself with a screwdriver. It is certainly not the screwdriver’s fault, and it is not even always your fault. S**t happens, quite simply. When a lot of people do a lot of happening, s**t happens recurrently. It is called risk. At the aggregate scale, risk is a tangible quantity of damage, not just a likelihood of damage taking place. This is why we store food for later and buy insurance policies. Dense human settlements mean lots of humans doing things very frequently in space and time, and that means more risk. We create social structures, and those structures work. This is how we survived. Those structures always have some flaws, and when we see them, we try to make some changes.

My point is that collective intelligence means collective capacity to figure stuff out when we are at a loss as to what to do next. It does not mean coming up with perfect solutions. It means advancing one more step along a path, without knowing exactly where that path leads. Scientifically, the concept is called adaptive walk in a rugged landscape. There is a specific theoretical shade to it, namely that of conscious representation.

Accidents happen, and another one has just happened. I stumbled upon a video on You Tube, entitled ‘This Scientist Proves Why Our Reality Is False | Donald Hoffman on Conversations with Tom’( https://youtu.be/UJukJiNEl4o ), and I went after the man, i.e. after prof. Hoffman. Yes, guys, this is what I like doing. When I find someone with interesting ideas, I tend to sort of follow them home. One of my friends calls it ‘the bulldog state of mind’. Anyway, I went down this specific rabbit hole, and I found two articles: Hoffman et al. 2015[1] and Fields et al. 2018[2]. I have professor Hoffman to thank for giving me hope that I am not mad when I use neural networks to represent collective intelligence. I owe him and his collaborators the theoretical polish they have added to my own work. I am like Moliere’s bourgeois turning into a gentleman: I suddenly realize what kind of prose I have been speaking about that topic. That prose is built around the concept of Markov chains, i.e. sequential states of reality where each consecutive state is the result of just the previous state, without exogenous corrections. The neural network I use is a Markovian kernel, i.e. a matrix (= a big table with numbers in it, to be simple) that transforms one Markov space into another.
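To make that prose tangible: a Markovian kernel, in its simplest finite form, is a stochastic matrix whose rows sum to 1, and ‘transforming one Markov space into another’ is just a matrix product. A toy example, with a two-state kernel invented for illustration:

```python
import numpy as np

# A Markovian kernel as a stochastic matrix: each row gives the
# probabilities of moving from one state to each possible next state.
# Applying it to a distribution yields the distribution over the next
# state -- each state depends only on the preceding one.

kernel = np.array([[0.9, 0.1],
                   [0.4, 0.6]])

state = np.array([1.0, 0.0])    # we start for sure in state 0
for _ in range(100):
    state = state @ kernel      # one Markovian step

print(state)   # converges to the stationary distribution [0.8, 0.2]
```

The chain forgets its starting point: after enough steps, the distribution settles on the kernel’s stationary distribution, whatever the initial state.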

As we talk about spaces, I feel like calling up two other mathematical concepts, important for understanding the concept of Conscious Agents Networks (yes, the acronym is CAN), as developed by professor Hoffman. These concepts are: measurable space and σ-algebra. If I take a set of any phenomenal occurrences – chicken, airplanes, people, diamonds, numbers and whatnot – I can recombine that set by moving its elements around, and I can define subsets inside of it by cherry-picking some elements. All those possible transformations of the set X, together with the way of doing them and the rules of delimiting the set X out of its environment, make up the σ-algebra of the set X. The set X together with its σ-algebra is a measurable space.
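For a finite set, that σ-algebra boils down to the power set: every subset you can cherry-pick, plus the empty set and the set itself, closed under complement and union. A quick sketch, with arbitrary example elements:

```python
from itertools import combinations

# Enumerating the power set of a small finite set X. For finite sets,
# the full sigma-algebra is exactly this collection of subsets.

X = {'chicken', 'airplane', 'diamond'}

def power_set(s):
    items = list(s)
    return [set(c) for r in range(len(items) + 1)
                   for c in combinations(items, r)]

subsets = power_set(X)
print(len(subsets))   # 2**3 = 8 subsets in the sigma-algebra of X
```

Note the closure property: for every subset in the list, its complement relative to X is in the list as well, which is what makes the collection an algebra rather than a loose pile of subsets.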

Fields et al. 2018 represent conscious existence in the world as relation between three essential, measurable spaces: states of the world or W, conscious experiences thereof or X, and actions, designated as G. Each of these is a measurable space because it is a set of phenomena accompanied by all the possible transformations thereof. States of the world are a set, and this set can be recombined through its specific σ-algebra. The same holds for experiences and actions. Conscious existence consists in consciously experiencing states of the world and taking actions on the grounds of that experience.

That brings up an interesting consequence: conscious existence can be represented as a mathematical manifold of 7 dimensions. Why 7? It is simple. States of the world W, for one. Experiences X, two. Actions G, three. Perception is a combination of experiences with states of the world, right? Therefore, perception P is a Markovian kernel (i.e. a set of strings) attaching those two together and can be represented as P: W×X → X. That makes four dimensions. We go further. Decisions are a transformation of experiences into actions, or D: X×G → G. Yes, this is another Markovian kernel, and it is the fifth dimension of conscious existence. The sixth one is the one that some people don’t like, i.e. the consequences of actions, thus a Markovian kernel that transforms actions into further states of the world, and spells A: G×W → W. All that happy family of phenomenological dimensions, i.e. W, X, G, P, D, A, needs another, seventh dimension to have any existence at all: they need time t. In the theory presented by Fields et al. 2018, a Conscious Agent (CA) is precisely a 7-dimensional combination of W, X, G, P, D, A, and t.
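The whole Perception–Decision–Action cycle can be toy-modelled with three small stochastic matrices standing in for P, D and A. This is a deliberately simplified sketch with invented kernels; in Fields et al. 2018 the kernels condition on the richer product spaces described above:

```python
import numpy as np

# Toy sketch of the Perception-Decision-Action cycle: each of P, D, A
# is reduced to a 2x2 stochastic matrix (rows sum to 1). All three
# kernels are invented for illustration.

P = np.array([[0.8, 0.2], [0.3, 0.7]])   # world state -> experience
D = np.array([[0.9, 0.1], [0.2, 0.8]])   # experience -> action
A = np.array([[0.6, 0.4], [0.5, 0.5]])   # action -> next world state

world = np.array([1.0, 0.0])             # initial distribution over W
for t in range(10):                      # the seventh dimension: time
    experience = world @ P
    action = experience @ D
    world = action @ A

print(world)   # distribution over states of the world after 10 cycles
```

Composing the three kernels gives one Markovian kernel from W back to W, which is why the whole loop, iterated over time t, behaves as a Markov chain.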

That paper by Fields et al. 2018 made me understand that representing collective intelligence with neural networks involves deep theoretical assumptions about perception and consciousness. Neural networks are mathematical structures. In simpler words, they are combinations of symmetrical equations, asymmetrical inequalities and logical propositions linking them (such as ‘if… then…’). Those mathematical structures are divided into output variables and input variables. A combination of inputs should stay in a given relation, i.e. equality, superiority or inferiority to a pre-defined output. The output variable is precisely the tricky thing. The theoretical stream represented by Fields et al. 2018, as well as by He et al. 2015[3], Hoffman et al. 2015[4] and Hoffman 2016[5], calls itself ‘Interface Theory of Perception’ (ITP) and assumes that the output of perception and consciousness consists in payoffs from environment. In other words, perception and consciousness are fitness functions, and organisms responsive only to fitness systematically outcompete those responsive to a veridical representation of reality, i.e. to truth about reality. In still other words, ITP stipulates that we live in a Matrix that we make by ourselves: we peg our attention on phenomena that give us payoffs and don’t give a s**t about all the rest.

Apparently, there is an important body of science which vigorously opposes the Interface Theory of Perception (see e.g. Trivers 2011[6]; Pizlo et al. 2014[7]), by claiming that human perception is fundamentally veridical, i.e. oriented on discovering the truth about reality.

In the middle of that theoretical clash, my question is: can I represent intelligent structures as Markov chains without endorsing the assumptions of ITP? In other words, can I assume that collective intelligence is a sequence of states, observable as sets of quantitative variables, and each such state is solely the outcome of the preceding state? I think it is possible, and, as I explore this particular question, I decided to connect with a review I am preparing right now, for a manuscript entitled ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’, submitted for publication in the International Journal of Energy Sector Management (ISSN 1750-6220). As I am just a reviewer of this paper, I don’t think I should disseminate its contents on my blog, and therefore I will break with my basic habit of providing linked access to the sources I quote. I will just be discussing the paper in this update, with the hope of adding, by the means of review, as much scientific value to the initial manuscript as possible.

I refer to that paper because it uses a neural network, namely a Levenberg–Marquardt Backpropagation Network, for validating a model of interactions between economy, energy, and environment. I want to start my review from this point, namely from the essential logic of that neural network and its application to the problem studied in the paper I am reviewing. The usage of neural networks in social sciences is becoming a fashion, and I like going through the basic assumptions of this method once again, as it comes in handy in connection with the Interface Theory of Perception which I have just passed cursorily through.

The manuscript ‘Evolutionary Analysis of a Four-dimensional Energy-Economy- Environment Dynamic System’ explores the general hypothesis that relations between energy, economy and the ecosystem form a self-regulating, complex system. In other words, this paper somehow assumes that human populations can somehow self-regulate themselves, although not necessarily in a perfect manner, as regards the balance between economic activity, consumption of energy, and environmental interactions.

Neural networks can be used in social sciences in two essential ways. First of all, we can assume that ‘IT IS INTELLIGENT’, whatever ‘IT’ is or means. A neural network is supposed to represent the way IT IS INTELLIGENT. Second of all, we can use neural networks instead of classical stochastic models so as to find the best-fitting values of the parameters ascribed to some variables. The difference between a stochastic method and a neural network, as regards nailing those parameters down, is in the way of reading and utilizing residual errors. We have ideas, right? As long as we keep them nicely inside our heads, those ideas look just great. Still, when we externalize those ideas, i.e. when we try and see how that stuff works in real life, then it usually hurts, at least a little. It hurts because reality is a bitch and does not want to bend to our expectations. When it hurts, the local interaction of our grand ideas with reality generates a gap. Mathematically, that gap ‘Ideal expectations – reality’ is a local residual error.

Essentially, mathematical sciences consist in finding such logical, recurrent patterns in our thinking, which generate as little residual error as possible when confronted with reality. The Pythagorean theorem c² = a² + b², the π number (yes, we read it ‘the pie number’) etc. – all that stuff consists of formalized ideas which hold in confrontation with reality, i.e. they generate very little error or no error at all. The classical way of nailing down those logical structures, i.e. the classical way of doing mathematics, consists in making a provisional estimation of what real life should look like according to our provisional math, then assessing all the local residual errors which inevitably appear as soon as we confront said real life, and, in a long sequence of consecutive steps, in progressively modifying our initial math so that it fits reality well. We take all the errors we can find at once, and we calibrate our mathematical structure so as to minimize all those errors at the same time.

That was the classical approach. Mathematicians whom we read about in history books were dudes who would spend a lifetime at nailing down one single equation. With the emergence of simple digital tools for statistics, it has become a lot easier. With software like SPSS or Stata, you can essentially create your own equations, and, provided that you have relevant empirical data, you can quickly check their accuracy. The problem with that approach, which is already being labelled as classical stochastic, is that if an equation you come up with proves statistically inaccurate, i.e. it generates a lot of error, you sort of have to guess what other equation could fit better. That classic statistical software speeds up the testing, but not really the formulation of equations as such.

With the advent of artificial intelligence, things have changed even further. Each time you fire up a neural network, that thing essentially nails down new math. A neural network learns: it does the same thing that great mathematical minds used to do. Each time a neural network makes an error, it learns on that single error, and improves, producing a slightly different equation and so forth, until error becomes negligible. I noticed there is a recent fashion to use neural networks as tools for validating mathematical models, just as classical stochastic methods would be used, e.g. Ordinary Least Squares. Generally, that approach has some kind of bad methodological smell for me. A neural network can process the same empirical data that an Ordinary Least Squares processes, and the neural network can yield the same type of parameters as the OLS test, and yet the way those values are obtained is completely different. A neural network is intelligent, whilst an Ordinary Least Squares test (or any other statistical test) is not. What a neural network yields comes out of a process very similar to thinking. The result of a test is just a number.  
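The difference between those two routes can be shown on a trivial linear case: the closed-form OLS solution, which solves all errors at once, against learning on one residual error at a time, in the style of a neural network. Data, seed and learning rate are made up for illustration:

```python
import numpy as np

# Two routes to (nearly) the same parameters: closed-form OLS versus
# error-by-error learning (stochastic gradient descent, one observation
# at a time). The data is invented: y = 1 + 3x plus a little noise.

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 3.0 * x + 1.0 + rng.normal(0, 0.05, 200)

# Route 1: classical stochastic -- minimize all errors at once
X = np.column_stack([np.ones_like(x), x])
ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Route 2: learn on each local residual error, one observation at a time
w = np.zeros(2)
for epoch in range(200):
    for xi, yi in zip(x, y):
        error = yi - (w[0] + w[1] * xi)      # the local residual error
        w += 0.05 * error * np.array([1.0, xi])

print(ols, w)   # both land near intercept 1.0 and slope 3.0
```

The numbers come out nearly identical, but the routes differ exactly as described above: the first is a single algebraic solution, the second is a stream of small corrections, each made in response to one local error.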

If someone says: ‘this neural network has validated my model’, I am always like: ‘Weeelll, I guess what this network has just done was to invent its own model, which you don’t really understand, on the basis of your model’. My point is that a neural network can optimize very nearly anything, yet the better a network optimizes, the more prone it is to overfitting, i.e. to being overly efficient at justifying a set of numbers which does not correspond to the true structure of the problem.

Validation of the model, and the use of a neural network to that purpose, leads me to the model itself, such as it is presented in the manuscript. This is a complex logical structure and, as this blog is supposed to serve the popularization of science, I am going to stop and study at length both the model and its connection with the neural network. First of all, the authors of that ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’ manuscript are nondescript to me. This is called blind review. Just in case I had some connection to them. Still, man, like really: if you want to conspire, do it well. Those authors technically remain anonymous, but right at the beginning of their paper they introduce a model, which, in order to be fully understood, requires referring to another paper, which the same authors quote: Zhao, L., & Otoo, C. O. A. (2019). Stability and Complexity of a Novel Three-Dimensional Environmental Quality Dynamic Evolution System. Complexity, 2019, https://doi.org/10.1155/2019/3941920 .

As I go through that referenced paper, I discover largely the same line of logic. Guys, if you want to remain anonymous, don’t send around your Instagram profile. I am pretty sure that ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’ is very largely a build-up on the paper they quote, i.e. ‘Stability and Complexity of a Novel Three-Dimensional Environmental Quality Dynamic Evolution System’. It is the same method of validation, i.e. a Levenberg-Marquardt Backpropagation Network, with virtually the same empirical data, and almost identical a model.

Good. I followed my suspects home, I know who they hang out with (i.e. with themselves), and now I can go back to their statement. Both papers, i.e. the one I am reviewing, and the one which serves as baseline, follow the same line of logic. The authors build a linear model of relations between economy, energy, and environment, with three main dependent variables: the volume x(t) of pollution emitted, the value of Gross Domestic Product y(t), and environmental quality z(t).

By the way, I can see that the authors need to get a bit more at home with macroeconomics. In their original writing, they use the expression ‘level of economic growth (GDP)’. As regards the Gross Domestic Product, you have either level or growth. Level means aggregate GDP, and growth means percentage change over time, like [GDP(t1) – GDP(t0)] / GDP(t0). As I try to figure out what exactly those authors mean by ‘level of economic growth (GDP)’, I go through the empirical data they introduce as regards China and its economy. Under the heading y(t), i.e. the one I’m after, they present standardized values which start at y(2000) = 1,1085 in the year 2000, and reach y(2017) = 9,2297 in 2017. Whatever the authors have in mind, aggregate GDP or its rate of growth, that thing had changed by 9,2297/1,1085 = 8,32 times between 2000 and 2017.

I go and check with the World Bank. The aggregate GDP of China, measured in constant 2010 US$, made $2 232 billion in 2000, and $10 131,9 billion in 2017. This is a change by 4,54 times, thus much less than the standardized change in y(t) that the authors present. I check with the rate of real growth in GDP. In 2000, the Chinese economic growth was 8,5%, and in 2017 it was 6,8%, which gives a change by (6,8/8,5) = 0,8 times and is, once again, far from the standardized 3,06 times provided by the authors. I checked with 2 other possible measures of GDP: in current US$, and in current international $ PPP. The latter indicator provides values for gross domestic product (GDP) expressed in current international dollars, converted by purchasing power parity (PPP) conversion factor. The first of the two yielded a 10,02 times growth in GDP, in China, from 2000 to 2017. The latter gives 5,31 times growth.
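The arithmetic of those checks can be laid out in a few lines. The GDP figures are the World Bank numbers quoted above (constant 2010 US$, billions); the standardized values come from the authors’ y(t) series:

```python
# Cross-checking the ratios discussed above, in plain arithmetic.

gdp_2000 = 2232.0       # aggregate GDP of China, constant 2010 US$, billions
gdp_2017 = 10131.9
print(round(gdp_2017 / gdp_2000, 2))        # 4.54-times change, as stated

growth_2000 = 8.5       # real GDP growth rate, percent
growth_2017 = 6.8
print(round(growth_2017 / growth_2000, 2))  # 0.8-times change

y_2000 = 1.1085         # the authors' standardized y(t) series
y_2017 = 9.2297
print(y_2017 / y_2000)  # the roughly 8.32-times change discussed above
```

None of the World Bank-based ratios reproduces the standardized change in the authors’ series, which is exactly the point of the check.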

Good. I conclude that the authors used some kind of nominal GDP in their data, calculated with internal inflation in the Chinese economy. That could be a serious drawback as regards the model they develop. This is supposed to be research on the mutual balance between economy, ecosystems, and energy. In this context, the economy should be measured in terms of real output, thus after shaving off inflation. Using nominal GDP is a methodological mistake.

What the hell, I go further into the model. This is a model based on differentials, thus on local gradients of change. The (allegedly) anonymous authors of the ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’ manuscript refer their model, without giving much of an explanation, to that presented in: Zhao, L., & Otoo, C. O. A. (2019). Stability and Complexity of a Novel Three-Dimensional Environmental Quality Dynamic Evolution System. Complexity, 2019. I need to cross-reference those two models in order to make sense of it.

The chronologically earlier model, the one in Zhao & Otoo (2019), operates within something that the authors call an ‘economic cycle’ (to be dug into seriously, ‘cause, man, the theory of economic cycles is like a separate planet in the galaxy of social sciences), and introduces four essential, independent variables computed as peak values inside the economic cycle. They are:

>>> F stands for peak value in the volume of pollution,

>>> E represents the peak of GDP (once again, the authors write about ‘economic growth’, yet there is no way it could possibly be economic growth, it has to be aggregate GDP),

>>> H stands for ‘peak value of the impact of economic growth 𝑦(𝑡) on environmental quality 𝑧(𝑡)’, and, finally,

>>> P is the maximum volume of pollution possible to absorb by the ecosystem.

With those independent peak values in the system, the baseline model focuses on computing first-order derivatives of, respectively, x(t), y(t) and z(t) over time. In other words, what the authors are after is change over time, noted as, respectively, d[x(t)]/d(t), d[y(t)]/d(t), and d[z(t)]/d(t).

The formal notation of the model is given in a triad of equations:  

d[x(t)]/d(t) = a1*x*[1 – (x/F)] + a2*y*[1 – (y/E)] – a3*z

d[y(t)]/d(t) = -b1*x – b2*y – b3*z

d[z(t)]/d(t) = -c1*x + c2*y*[1 – (y/H)] + c3*z*[(x/P) – 1]
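To make the mechanics of that triad tangible, here is a minimal forward-Euler sketch of the system. I stress: the parameter values (a1…c3) and the peak levels (F, E, H, P) below are placeholders of my own, not the calibration used by Zhao & Otoo.

```python
# Forward-Euler sketch of the three-equation baseline model.
# Parameter values (a1..c3) and peaks (F, E, H, P) are placeholders
# of my own, NOT the calibration used in Zhao & Otoo (2019).

def simulate(x, y, z, steps=500, dt=0.01,
             a=(0.5, 0.3, 0.2), b=(0.1, 0.05, 0.1), c=(0.2, 0.3, 0.1),
             F=10.0, E=20.0, H=15.0, P=12.0):
    """Integrate the triad of equations over steps*dt units of time."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    c1, c2, c3 = c
    for _ in range(steps):
        dx = a1 * x * (1 - x / F) + a2 * y * (1 - y / E) - a3 * z
        dy = -b1 * x - b2 * y - b3 * z
        dz = -c1 * x + c2 * y * (1 - y / H) + c3 * z * (x / P - 1)
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z

x, y, z = simulate(1.0, 1.0, 1.0)
```

Even this toy version shows what the equations literally say: pollution x(t) grows logistically toward its peak F, output y(t) only ever decays, and environmental quality z(t) is dragged down by pollution.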

Good. This is the baseline model presented in Zhao & Otoo (2019). I am going to comment on it, and then I present the extension to that model, which the paper under review, i.e. ‘Evolutionary Analysis of a Four-dimensional Energy- Economy- Environment Dynamic System’, introduces as its theoretical value added.

Thus, I comment. The model generally assumes two things. Firstly, gradients of change in pollution x(t), real output y(t), and environmental quality z(t) are sums of fractions taken out of stationary states. It is like saying: the pace at which this child will grow will be a fraction of their bodyweight, plus a fraction of the difference between their current height and their tallest relative’s height, etc. This is a computational trick more than solid theory. In statistics, when we study empirical data pertinent to economics or finance, we frequently have things like non-stationarity (i.e. a trend of change or a cycle of change) in some variables, very different scales of measurement, etc. One way out of that is to do regression on the natural logarithms of the data (logarithms flatten out whatever needs to be flattened), or on first derivatives over time (i.e. growth rates). It usually works, i.e. logarithms or first differences of the original data yield better accuracy in linear regression than the original data itself. Still, it is a computational trick, which can help validate a theory, not a theory as such. To my knowledge, there is no theory to postulate that the gradient of change in the volume of pollution d[x(t)]/d(t) is a sum of fractions resulting from the current economic output or the peak possible pollution in the economic cycle. Even if we assume that relations between energy, economy and environment in a human society form a complex, self-organizing system, that system is supposed to work through interaction, not through the addition of growth rates.
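To illustrate those two tricks on made-up data (not the authors’ series): a series growing at a constant 5% per period is strongly non-stationary in levels, yet its logarithm is a straight line and its growth rates are flat.

```python
# Two standard flattening tricks: logs, and first differences
# (growth rates), applied to a made-up exponential series.
import math

series = [100 * 1.05 ** t for t in range(20)]  # 5% growth per period

logs = [math.log(v) for v in series]
log_diffs = [b - a for a, b in zip(logs, logs[1:])]         # constant ln(1.05)
growth = [(b - a) / a for a, b in zip(series, series[1:])]  # constant 0.05
```

That is exactly why logs and first differences behave so well in regression: they strip the trend out. It is also why they remain a transformation of the data, not a theory about it.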

I need to wrap my mind a bit more around those equations, and here comes another assumption I can see in that model. It assumes that the pace of change in output, pollution and environmental quality depends on intra-cyclical peaks in those variables. You know, those F, E, H and P peaks, which I mentioned earlier. Somehow, I don’t follow this logic. The peak of any process depends on the cumulative rates of change, rather than the other way around. Besides, if I assume any kind of attractor in a stochastic process, it would rather be the mean-reversion level, and not really the local maximum.
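To show what I mean by a mean-reverting attractor (a toy sketch with arbitrary numbers of my own, not a claim about the reviewed model), here is a discrete Ornstein-Uhlenbeck-style process: wherever it starts, it gets pulled back toward its long-run mean, not toward its local maxima.

```python
# A toy mean-reverting (Ornstein-Uhlenbeck-style) process, with
# arbitrary parameters of my own choosing: wherever it starts, it is
# pulled back toward its long-run mean mu, not toward a local peak.
import random

def mean_reverting_path(x0, mu=10.0, kappa=0.2, sigma=0.5,
                        steps=200, seed=42):
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x += kappa * (mu - x) + rng.gauss(0.0, sigma)  # pull toward mu + noise
        path.append(x)
    return path

path = mean_reverting_path(x0=0.0)  # starts far below the mean of 10
```

The local maxima of such a path are accidents of the noise; the level the process keeps returning to is mu. That is the kind of attractor I would expect a theory of economy-energy-environment dynamics to anchor on, rather than intra-cycle peaks.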

I can see that reviewing that manuscript will be tons of fun, intellectually. I like it. For the time being, I am posting those uncombed thoughts of mine on my blog, and I keep thinking.

Discover Social Sciences is a scientific blog, which I, Krzysztof Wasniewski, individually write and manage. If you enjoy the content I create, you can choose to support my work, with a symbolic $1, or whatever other amount you please, via MY PAYPAL ACCOUNT. What you will be contributing to is almost exactly what you can read now. I have been blogging since 2017, and I think I have a pretty clearly defined style.

At the bottom of the sidebar on the main page, you can access the archives of this blog, all the way back to August 2017. You can get an idea of how I work, what I work on, and how my writing has evolved. If you like social sciences served in this specific sauce, I will be grateful for your support of my research and writing.

‘Discover Social Sciences’ is a continuous endeavour and is mostly made of my personal energy and work. There are minor expenses, to cover the current costs of maintaining the website, or to collect data, yet I want to be honest: by supporting ‘Discover Social Sciences’, you will be mostly supporting my continuous stream of writing and online publishing. As you read through the stream of my updates on https://discoversocialsciences.com , you can see that I usually write 1 – 3 updates a week, and this is the pace of writing that you can expect from me.

Besides the continuous stream of writing which I provide to my readers, there are some more durable takeaways. One of them is an e-book which I published in 2017, ‘Capitalism And Political Power’. Normally, it is available with the publisher, the Scholar publishing house (https://scholar.com.pl/en/economics/1703-capitalism-and-political-power.html?search_query=Wasniewski&results=2 ). Via https://discoversocialsciences.com , you can download that e-book for free.

Another takeaway you can be interested in is ‘The Business Planning Calculator’, an Excel-based, simple tool for financial calculations needed when building a business plan.

Both the e-book and the calculator are available via links in the top right corner of the main page on https://discoversocialsciences.com .

You might be interested in the Virtual Summer Camps, as well. These are free, half-day, week-long summer camps, with enrichment-based classes in subjects like foreign languages, chess, theater, coding, Minecraft, how to be a detective, photography and more. These live, interactive classes are taught by expert instructors vetted through Varsity Tutors’ platform, with 200 camps already scheduled for the summer: https://www.varsitytutors.com/virtual-summer-camps .

