Pardon my French, but the thing is really intelligent

My editorial on You Tube

And so I am meddling with neural networks. It had to come. It just had to. I started with me having many ideas to develop at once. Routine stuff with me. Then, the Editor-in-Chief of the ‘Energy Economics’ journal returned the manuscript of an article on the energy efficiency of national economies, which I had submitted to them, with a general remark that I should work both on the clarity of my hypotheses, and on the scientific spin of my empirical research. In short, Mr Wasniewski, linear models tested with Ordinary Least Squares are a bit oldie, if you catch my drift. Bloody right, Mr Editor-in-Chief. Basically, I agree with your remarks. I need to move out of my cavern, towards the light of progress, and get acquainted with the latest fashion. The latest fashion we are wearing this season is artificial intelligence, machine learning, and neural networks.

It comes handy, to the extent that I obsessively meddle with the issue of collective intelligence, and am dreaming about creating a model of human social structure acting as collective intelligence, sort of a beehive. Whilst the casting for a queen in that hive remains open, and is likely to stay this way for a while, I am digging into the very basics of neural networks. I am looking in the Python department, as I have already got a bit familiar with that environment. I found an article by James Loy, entitled “How to build your own Neural Network from scratch in Python”. The article looks a bit like sourcing from another one, available at the website of ProBytes Software, thus I use both to develop my understanding. I pasted the whole algorithm by James Loy into my Python Shell, made it run with an ‘enter’, and I am waiting for what it is going to produce. In the meantime, I am being verbal about my understanding.

The author declares he wants to do more or less the same thing as I do, namely to understand neural networks. He constructs a simple algorithm for a neural network. It starts with defining the neural network as a class, i.e. as a callable object that acts as a factory for new instances of itself. In the neural network defined as a class, that algorithm starts with the constructor function ‘__init__’, which constructs an instance ‘self’ of that class. It goes like ‘def __init__(self, x, y):’. In other words, the class ‘NeuralNetwork’ generates instances ‘self’ of itself, and each instance is essentially made of two variables: input x, and output y. The ‘x’ is declared as input variable through the ‘self.input = x’ expression. Then, the output of the network is defined in two steps. Yes, the ‘y’ is generally the output, only in a neural network, we want the network to predict a value of ‘y’, thus some kind of y^. What we have to do is to define ‘self.y = y’, feed the real x-s and the real y-s into the network, and expect the latter to turn out some y^-s.

Logically, we need to prepare a vessel for holding the y^-s. The vessel is defined as ‘self.output = np.zeros(y.shape)’. The ‘np.zeros’ function creates a table (an array, for those mildly fond of maths) of given dimensions, filled with zeros, and ‘y.shape’ is simply the tuple of dimensions of the ‘y’ we have fed in: one socket per observation. Right before that, just after the ‘self.input = x’ has been said, the weights fire off: ‘self.weights1 = np.random.rand(self.input.shape[1],4)’, closely followed by ‘self.weights2 = np.random.rand(4,1)’. All in all, the entire class ‘NeuralNetwork’ is defined in the following form:

import numpy as np   # the algorithm relies on NumPy throughout

class NeuralNetwork:

    def __init__(self, x, y):
        self.input      = x
        self.weights1   = np.random.rand(self.input.shape[1],4)
        self.weights2   = np.random.rand(4,1)
        self.y          = y
        self.output     = np.zeros(self.y.shape)

The output of that neural network is a table with as many rows as there are observations in ‘y’, and one column: that is what ‘np.zeros(self.y.shape)’ yields. Initially, it is filled with zeros, so as to make room for something more meaningful: the predicted y^-s are supposed to jump into those empty sockets, held ready by the zeros. The two tables of weights, in turn, are sized (number of input variables × 4) and (4 × 1), with the 4 standing for four hidden neurons in between. The ‘random.rand’ expression, associated with ‘weights’, means that the network starts by assigning random levels of importance to the different x-s fed into it.
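Just to check those dimensions concretely, here is a minimal sketch; the sample sizes (25 observations of 5 input variables) are my own assumption, not anything from the article:

```python
import numpy as np

x = np.random.rand(25, 5)   # 25 observations, 5 input variables
y = np.random.rand(25, 1)   # one target value per observation

weights1 = np.random.rand(x.shape[1], 4)   # (5, 4): one column per hidden neuron
weights2 = np.random.rand(4, 1)            # (4, 1): hidden layer to the single output
output = np.zeros(y.shape)                 # (25, 1): one empty socket per observation

print(weights1.shape, weights2.shape, output.shape)
```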

Anyway, the next step is to instruct my snake (i.e. the Python) what to do next with that class ‘NeuralNetwork’. It is supposed to do two things: feed data forward, i.e. make those neurons work on predicting the y^-s, and then check itself by an operation called backpropagation of errors. The latter consists in comparing the predicted y^-s with the real y-s, measuring the discrepancy as a loss, updating the initial random weights with conclusions from that measurement, and doing it all again, and again, and again, until the error runs down to very low values. The weights applied by the network in order to generate that lowest possible error are the best the network can do in terms of learning.

The feeding forward of predicted y^-s goes on in two steps, or in two layers of neurons, one hidden, and one final. They are defined as:

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))

The ‘sigmoid’ part means the sigmoid function, AKA the logistic function, expressed as y = 1/(1 + e^(-x)), where, at the end of the day, the y always falls somewhere between 0 and 1, and the ‘x’ is not really the empirical, real ‘x’, but the ‘x’ multiplied by a weight, ranging between 0 and 1 as well. The sigmoid function is good for testing the weights we apply to various types of input x-es. Whatever kind of data you take: populations measured in millions, or consumption of energy per capita, measured in kilograms of oil equivalent, the basic sigmoid function y = 1/(1 + e^(-x)) will always yield a value between 0 and 1. This function essentially normalizes any data.

Now, I want to take differentiated data, like population as headcount, energy consumption in them kilograms of whatever oil equals to, and the supply of money in standardized US dollars. Quite a mix of units and scales of measurement. I label those three as, respectively, xa, xb, and xc. I assign them weights ranging between 0 and 1, so that the sum of weights never exceeds 1. In plain language it means that for every vector of observations made of xa, xb, and xc I take a pinchful of xa, then a zest of xb, and a spoon of xc. I make them into x = wa*xa + wb*xb + wc*xc, I give it a minus sign, and I put it as an exponent for Euler's number e.

That yields y = 1/(1 + e^(-(wa*xa + wb*xb + wc*xc))). Long, but meaningful to the extent that now, my y always falls somewhere between 0 and 1, and I can experiment with various weights for my various shades of x, and look at what it gives in terms of y.

In the algorithm above, the ‘np.dot()’ function conveys the idea of weighing our x-s. Given two one-dimensional matrices, like the input signals ‘x’ and their weights ‘w’, ‘np.dot()’ multiplies them pairwise and sums the products up, exactly in the x = wa*xa + wb*xb + wc*xc drift.

Thus, the first really smart layer of the network, the hidden one, takes the empirical x-s, weighs them with random weights, and makes a sigmoid of that. The next layer, the output one, takes the sigmoid-calculated values from the hidden layer, and applies the same operation to them.

One more remark about the sigmoid. You can put something else instead of 1 in the numerator. Then, the sigmoid will yield your data normalized over that something. If you have a process that tends towards a level of saturation, e.g. the number of psilocybin parties per month, you can put that level in the numerator. On top of that, you can add parameters to the denominator. In other words, you can replace the 1 + e^(-x) with b + e^(-k*x), where b and k can be whatever seems to make sense to you. With that specific spin, the sigmoid is good for simulating anything that tends towards saturation over time. Depending on the parameters in the denominator, the shape of the corresponding curve will change. Usually, ‘b’ works well when taken as a fraction of the numerator (the saturation level), and the ‘k’ seems to behave meaningfully when comprised between 0 and 1.
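That generalized sigmoid is easy to play with; in the sketch below, the saturation level L, and the parameters b and k, are arbitrary picks of mine:

```python
import math

def saturating(t, L=100.0, b=1.0, k=0.3):
    """Generalized logistic curve: tends towards the ceiling L/b as t grows."""
    return L / (b + math.exp(-k * t))

# the curve climbs towards its ceiling of 100 and flattens out
for t in (0, 10, 20, 40):
    print(t, round(saturating(t), 2))
```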

I return to the algorithm. Now, as the network has generated a set of predicted y^-s, it is time to compare them to the actual y-s, and to evaluate how much there still is to learn. We can use any measure of error, still, most frequently, them algorithms go after the simplest one, namely the Mean Square Error: MSE = [(y1 - y^1)^2 + (y2 - y^2)^2 + … + (yn - y^n)^2]/n. Take the square root of the sum of those squared differences, and you have the Euclidean distance between the set of actual y-s and that of predicted y^-s; the MSE itself behaves much like the variance of the predicted y^-s around the actual, empirical y-s.
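Both measures are easy to see side by side; the actual y-s below are the first rows of Table 1, rounded, and the predictions are hypothetical numbers of mine:

```python
import numpy as np

y_actual = np.array([5.66, 5.72, 5.64, 5.60])   # first y-s from Table 1, rounded
y_pred   = np.array([5.50, 5.80, 5.70, 5.55])   # hypothetical predictions

errors = y_actual - y_pred
mse = np.mean(errors ** 2)              # mean of the squared deviations
euclid = np.sqrt(np.sum(errors ** 2))   # Euclidean distance: root of the SUM

print(mse, euclid)   # note that euclid**2 == n * mse
```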

In this precise algorithm, the author goes down another avenue: he takes the actual differences between the observed y-s and the predicted y^-s, and then multiplies them by the sigmoid derivative of the predicted y^-s. Those (y – y^)*(y^)' terms, with (y^)' standing for the derivative, are then combined, through a ‘.dot’ product, with the transpose of the layer that produced them. It goes like:

    def backprop(self):
        # application of the chain rule to find the derivative of the loss function with respect to weights2 and weights1
        d_weights2 = np.dot(self.layer1.T, (2*(self.y - self.output) * sigmoid_derivative(self.output)))
        d_weights1 = np.dot(self.input.T, (np.dot(2*(self.y - self.output) * sigmoid_derivative(self.output), self.weights2.T) * sigmoid_derivative(self.layer1)))

        # update the weights with the derivative (slope) of the loss function
        self.weights1 += d_weights1
        self.weights2 += d_weights2

def sigmoid(x):
    return 1.0/(1 + np.exp(-x))

def sigmoid_derivative(x):
    # here x is already the sigmoid's output, hence x*(1 - x)
    return x * (1.0 - x)

I am still trying to wrap my mind around the reasons for taking this specific approach to the backpropagation of errors. The derivative of a sigmoid y = 1/(1 + e^(-x)) is y' = [1/(1 + e^(-x))]*{1 - [1/(1 + e^(-x))]}, i.e. y*(1 - y), and, as any derivative, it measures the slope of change in y. When I do (y1 - y^1)*(y^1)' + (y2 - y^2)*(y^2)' + … + (yn - y^n)*(y^n)', it is as if I were taking some kind of weighted average. That weighted average can be understood in two alternative ways. Either it is the deviation of y^ from y, weighted with the local slopes, or it is a general slope weighted with the local deviations. Now, the transpose, marked with ‘.T’ in the code, does not invert any terms: it just flips a matrix from rows into columns, or the other way round, so that the ‘.dot’ product of the transposed layer with the matrix of those (y1 - y^1)*(y^1)' ; (y2 - y^2)*(y^2)' ; … (yn - y^n)*(y^n)' terms comes out with the right dimensions. The ‘.dot’ product multiplies the paired terms and sums them up. Then, I feed that ‘.dot’ product into the neural network with the ‘+=’ operator, which means that in the next round of calculations, the network starts from weights corrected by those terms. Hmmweeellyyeess, makes some sense. I don’t know what exact sense is that, but it has some mathematical charm.
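For readers who want to run the whole thing at once, here is the complete algorithm assembled into one script. The toy data (four observations of three inputs) follows the example in James Loy's article; the fixed random seed is my addition, for reproducibility:

```python
import numpy as np

np.random.seed(42)   # fixed seed, so the run is reproducible (my addition)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # x is already the sigmoid's output here, hence x*(1 - x)
    return x * (1.0 - x)

class NeuralNetwork:
    def __init__(self, x, y):
        self.input = x
        self.weights1 = np.random.rand(self.input.shape[1], 4)
        self.weights2 = np.random.rand(4, 1)
        self.y = y
        self.output = np.zeros(self.y.shape)

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))

    def backprop(self):
        d_weights2 = np.dot(self.layer1.T,
                            2 * (self.y - self.output) * sigmoid_derivative(self.output))
        d_weights1 = np.dot(self.input.T,
                            np.dot(2 * (self.y - self.output) * sigmoid_derivative(self.output),
                                   self.weights2.T) * sigmoid_derivative(self.layer1))
        self.weights1 += d_weights1
        self.weights2 += d_weights2

# the toy data from James Loy's article: four observations, three inputs each
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

nn = NeuralNetwork(X, y)
for _ in range(1500):
    nn.feedforward()
    nn.backprop()

print(nn.output)   # after 1500 rounds, typically close to [0, 1, 1, 0]
```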

Now, I try to apply the same logic to the data I am working with in my research. Just to give you an idea, I show some data for just one country: Australia. Why Australia? Honestly, I don’t see why it shouldn’t be. Quite a respectable place. Anyway, here is that table. GDP per unit of energy consumed can be considered as the target output variable y, and the rest are those x-s.

Table 1 – Selected data regarding Australia

y: GDP per unit of energy use (constant 2011 PPP $ per kg of oil equivalent); X1: Share of aggregate amortization in the GDP; X2: Supply of broad money, % of GDP; X3: Energy use (tons of oil equivalent per capita); X4: Urban population as % of total population; X5: GDP per capita, ’000 USD

Year y X1 X2 X3 X4 X5
1990 5,662020744 14,46 54,146 5,062 85,4 26,768
1991 5,719765048 14,806 53,369 4,928 85,4 26,496
1992 5,639817305 14,865 56,208 4,959 85,566 27,234
1993 5,597913126 15,277 56,61 5,148 85,748 28,082
1994 5,824685357 15,62 59,227 5,09 85,928 29,295
1995 5,929177604 15,895 60,519 5,129 86,106 30,489
1996 5,780817973 15,431 62,734 5,394 86,283 31,566
1997 5,860645225 15,259 63,981 5,47 86,504 32,709
1998 5,973528571 15,352 65,591 5,554 86,727 33,789
1999 6,139349354 15,086 69,539 5,61 86,947 35,139
2000 6,268129418 14,5 67,72 5,644 87,165 35,35
2001 6,531818805 14,041 70,382 5,447 87,378 36,297
2002 6,563073754 13,609 70,518 5,57 87,541 37,047
2003 6,677186947 13,398 74,818 5,569 87,695 38,302
2004 6,82834791 13,582 77,495 5,598 87,849 39,134
2005 6,99630318 13,737 78,556 5,564 88 39,914
2006 6,908872246 14,116 83,538 5,709 88,15 41,032
2007 6,932137612 14,025 90,679 5,868 88,298 42,022
2008 6,929395465 13,449 97,866 5,965 88,445 42,222
2009 7,039061961 13,698 94,542 5,863 88,59 41,616
2010 7,157467568 12,647 101,042 5,649 88,733 43,155
2011 7,291989544 12,489 100,349 5,638 88,875 43,716
2012 7,671605162 13,071 101,852 5,559 89,015 43,151
2013 7,891026044 13,455 106,347 5,586 89,153 43,238
2014 8,172929207 13,793 109,502 5,485 89,289 43,071

In his article, James Loy reports the cumulative error over 1500 iterations of training, with just four series of x-s, made of four observations. I do something else. I am interested in how the network works, step by step. I do step-by-step calculations with data from that table, following the algorithm I have just discussed. I do it in Excel, and I observe the way the network behaves. I can see that the hidden layer is really hidden, to the extent that it does not produce much in terms of meaningful information. What really spins is the output layer, thus, in fact, the connection between the hidden layer and the output. In the hidden layer, all the predicted sigmoid y^ are equal to 1, and their derivatives are automatically 0. Still, in the output layer, where the second random distribution of weights overlaps with the first one from the hidden layer, something does happen: for some years, those output sigmoids demonstrate tiny differences from 1, and their derivatives become very small positive numbers. As a result, tiny, local (yi – y^i)*(y^i)' expressions are being generated in the output layer, and they modify the initial weights in the next round of training.

I observe the cumulative error (loss) over the first four iterations. In the first one it is 0,003138796, the second round brings 0,000100228, the third round displays 0,0000143, and the fourth one 0,005997739. It looks like an initial reduction of the cumulative error, by one order of magnitude at each iteration, and then, in the fourth round, it jumps up to the highest cumulative error of the four. I extend the number of those hand-driven iterations from four to six, and I keep feeding the network with random weights, again and again. A pattern emerges. The cumulative error oscillates. Sometimes the network drives it down, sometimes it swings it up.

F**k! Pardon my French, but just six iterations of that algorithm show me that the thing is really intelligent. It generates an error, it drives it down to a lower value, and then, as if it was somehow dysfunctional to jump to conclusions that quickly, it generates a greater error in consecutive steps, as if it was considering more alternative options. I know that data scientists, should they read this, can slap their thighs at that elderly uncle (i.e. me), fascinated with how a neural network behaves. Still, for me, it is science. I take my data, I feed it into a machine that I see for the first time in my life, and I observe intelligent behaviour in something written on less than one page. It experiments with weights attributed to the stimuli I feed into it, and it evaluates its own error.

Now, I understand why that scientist from MIT, Lex Fridman, says that building artificial intelligence brings insights into how the human brain works.

I am consistently delivering good, almost new science to my readers, and I love doing it, and I am working on crowdfunding this activity of mine. As we talk business plans, I remind you that you can download, from the library of my blog, the business plan I prepared for my semi-scientific project Befund (and you can access the French version as well). You can also get a free e-copy of my book ‘Capitalism and Political Power’. You can support my research by donating directly, any amount you consider appropriate, to my PayPal account. You can also consider going to my Patreon page and becoming my patron. If you decide so, I will be grateful for suggesting me two things that Patreon suggests me to suggest you. Firstly, what kind of reward would you expect in exchange for supporting me? Secondly, what kind of phases would you like to see in the development of my research, and of the corresponding educational tools?

Experimenting with new structures

My editorial

I return to doing my review of literature in order to find accurate distinctions as for historical changes. Besides those articles on the history of technology as such, I found an interesting paper by Bert J.M. de Vries, Detlef P. van Vuuren, and Monique M. Hoogwijk (de Vries et al. 2007[1]). It is interesting because the authors attempted to assess the potential for producing renewable energies for the first half of the 21st century, and they did it in 2007, so exactly at the moment when, according to my calculations, something really changed in the sector of renewable energies and it started developing significantly faster than before. It is interesting to see how other researchers saw the future at a moment which is already the past, and quite a significant bit of past. As usually, I start from the end. The kitchen door I can poke my head through, regarding this paper, is Appendix B, presenting, in a concise, tabular form, the so-called four scenarios developed by the authors. So I rummage through those scenarios, and one thing instantaneously jumps to my eye: the absence of the solar-thermal technology. Today, the solar-thermal is probably the most powerful technology for producing renewable energy, yet, when this paper was being written, so not later than in 2007, the solar-thermal technology was still in its infancy. It was precisely in 2007 that the World Bank declassified and published the first reports about the big Ouarzazate project in Morocco, and only in 2011 did it become more or less clear how to finance such projects. As I am engaged in research on renewable energies, I will probably mention the solar-thermal technology more than once, and for now I am content with noticing that solar-thermal changed a lot in the way we perceive the future of renewable energies.

Returning to this paper by Bert J.M. de Vries, Detlef P. van Vuuren, and Monique M. Hoogwijk, and to their scenarios, I see four basic technologies being taken into account: wind, solar photovoltaic, biomass electric, and liquid biomass. As a matter of fact, we have another technology, besides the solar-thermal, which strikes by its absence at this precise party: the hydro, in its many shades. Right, probably the authors did not have access to the relevant data. I continue studying the table in Appendix B, and what strikes me as odd is that each technology is described with a different set of quantitative variables. It is easy to see that the authors had a hard time driving different technologies to a common denominator. As I stop clinging to that Appendix B and browse through the whole paper, I can see a lot of maths but very few numbers. It is hard to say what exactly the empirical basis for the presented scenarios was. All in all, this paper by de Vries et al. is interesting for some methodological insights, but a little disappointing on the whole. I can see nothing that looks like a real discovery.

And so I am drifting towards other sources, and I come by an interesting author, David Edgerton and his book ‘Shock of the old: Technology and global history since 1900’ (Edgerton 2011[2]). I am fishing the most interesting bits out of this study, and I find some insights worth stopping by and think. First of all, David Edgerton points out that we commonly live in an illusion of constant technological progress, i.e. of a process, which consistently brings improvement in the conditions of living, as technology changes. Edgerton shows, quite convincingly, that technological change is not necessarily to put at equality with technological progress. According to his findings, there were just a few periods of real technological progress since 1900: between 1900 and 1913, followed by another between 1950 and 1973. Save for those short windows in time, the link between technological change and the conditions of living is really ambiguous.

In Section 4 of his book, entitled ‘Maintenance’, David Edgerton brings forth that intuition that I very much share: that the way we deal with both the physical wear and tear of our technology, and with its moral obsolescence, is deeply significant to our overall well-being due to technology. There is that interesting thing I noticed in Chinese cities, this summer: Chinese people do hardly any renovation in buildings. Save for those occasions when a store is being replaced by another and the facade needs some obvious patching, buildings in China seem to live a life similar to that of clothes: when they are used up, they are being demolished and something new is being built in the same location (they do put a lot of effort into maintaining infrastructure, mind you). In Europe, we are almost obsessively attached to maintaining our buildings in good condition, and we spend a lot on renovation. Two cultures, two completely different roles of maintenance in the construction business. At this point, the research presented by David Edgerton gets strong support in earlier work by the eminent French historian Fernand Braudel (Braudel 1981[3], 1983[4]): real technological revolutions happened, in many fields of human ingenuity, when the technology in place allowed providing for more than simple maintenance. Both authors (Edgerton and Braudel) point to a turning point in the history of agriculture, probably by the end of the 19th century, when the technology of farming allowed spending relatively less effort on rebuilding the fertile power of the soil.

Another great insight to find with David Edgerton is the importance of killing in technological change. Warfare, hunting, whaling, combatting pathogens, eradicating parasitic species from our agricultural structures – it all means killing, and it apparently has had a tremendous impact on technological change in the 20th century. This dramatic and deep reference to one of our fundamental functions as organisms (yes, my dear vegans, we all kill someone or something, even without knowing we do) makes me think about this elaborate metaphor of struggling civilisations, to find in Arnold Toynbee’s ‘Study of History’[5]. Development of societies is not an easy game: it is struggle, risk, sometimes stagnation. We, in our civilisation, as it is now, at the beginning of the 21st century, have no grounds to assume we are different than those who struggled before us. Arnold Toynbee coined this metaphor of struggle in an even broader context: how can we distinguish between civilisations? His point was that criteria like race, skin colour or even religious beliefs are misleading in classifying cultures and societies. To him, the real distinction pertained to the state of the struggle, so to say: is there a visible breakthrough in the past of the given social group, a kind of dramatic threshold which made those people take a completely different turn? If the answer is ‘yes’, then we could very well have two different civilisations, before and after that pivotal point in time. If no, we have, in fact, the same cultural paradigm, just dressed differently according to the current fashion.

All the review of literature about the history of technology, and about renewable energies, makes me ask once more that fundamental question: what exactly happened in 2007 – 2008, when the market of renewable energies suddenly moved its lazy ass and started growing much faster than before? Hypothesis #1: it was mostly a technological change, possibly connected to the banalisation of photovoltaic modules, or to the concurrent emergence of the solar-thermal technology. Hypothesis #2, just a shade away from #1, is that we are talking about technological change in the immediate vicinity of renewable sources of energy. Maybe something about electronics, nanotechnology, transport? Here, although the hypothesis is really interesting, I am very much at a loss. I will have to do some book-worming about it. Hypothesis #3: what happened in 2007 – 2008 was mostly a social change. Something shifted in our patterns of being together in society, which made people turn towards renewable energies more than before. That something could pertain to urbanisation, density of population, food deficit, velocity of money and whatnot. I have already done a lot of empirical testing in this respect.

Now, I am shifting my focus slightly, onto my personal experience in innovating, namely in learning the Python. As I have been wrestling with the Python, those last four days, I became aware of a pattern in my behaviour: I start with trying to make the new technology do exactly the same things that the old technology used to do for me, then I progressively become aware that I need to learn new skills in order to get those results (so I have that strange connection: new technology, old expectations regarding the outcomes, new skills to learn), and I also become aware that I need to reshuffle my resources, namely information, so as to squeeze it into new structures. Now, I am advancing one step further and I try to go the other way round: I had a look at what the logical structures of the Python look like, and what I am trying to do now is to discover what the Python can possibly do for me. In other words, I am progressively passing from using the Python as just a harder-to-tame version of Excel towards discovering what it can do for me.

I am playing with one of the first things I found in the ‘Tutorial’ textbook, available at, namely with the structure ‘for n in range(x, y):’, where you then specify what you want the Python to do while n stays in the given range. Here below is a simple try on my part:

>>> for x in range(1, 10):
            structure={'x':(x**2, x**0,5)}
            print(list(structure))
# Honestly, I don’t even know why I formed this precise function. It just came like that. I imagined a structure in which a cardinal order is established, from 1 to 9, and each point in the order has two coordinates derived from its position: square power and square root. It is nice, and it didn’t exactly work as I planned. I got a nine-time repeated [‘x’]. As I put that ‘x’ in single quotes, it has been considered as a word, or a piece of content, so the dictionary has just one key: the letter ‘x’ itself. Now, I modify slightly my structure: I take off those single quotes. Here is the result:

>>> for x in range(1, 10):
            structure={x:{x**2, x**0,5}}
            print(list(structure))
#Better a small step forward than no steps at all, as they say. After having removed those single quotes from around the x, I got a cardinal order in response: each pass prints the list of keys, i.e. [1], [2], and so on. Still, the function did not yield the coordinates I expected to have out of it. Now, I replace the command ‘list()’ with the command ‘print()’ applied directly to the structure. So far, I have not quite grasped the distinction between those two. I decided to experiment in order to acquire some knowledge, and here comes what I got:

>>> for x in range(1, 10):
            structure={x:{x**2, x**0,5}}
            print(structure)


{1: {1, 5}}

{2: {1, 4, 5}}

{3: {9, 5, 1}}

{4: {16, 1, 5}}

{5: {25, 5, 1}}

{6: {1, 36, 5}}

{7: {49, 5, 1}}

{8: {64, 1, 5}}

{9: {81, 5, 1}}

#Now, this is most interesting. As I replaced ‘list()’ with ‘print()’, the function finally yielded the coordinates. There is just one tiny little problem: in 8 cases out of 9, starting from the cardinal 1, I have three coordinates instead of two. Why? Oh, I see. I put ‘0,5’ instead of ‘0.5’. For the Python, decimal values are accepted as such only if the decimal separator is literally the point, not the comma. The expression ‘x**0,5’, which I meant to be x power 0,5, has been read by the Python as x power 0, followed by a 5. OK, I understand. My mistake. I replace the ‘0,5’ with ‘0.5’, by the book, and I launch it again.
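The pitfall is easy to reproduce in isolation:

```python
# a comma builds a tuple; it is not a decimal separator
a = 2 ** 0, 5    # parsed as (2 ** 0), 5
b = 2 ** 0.5     # the intended square root

print(a)   # (1, 5)
print(b)   # 1.4142135623730951
```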

>>> for x in range(1, 10):
            structure={x:{x**2, x**0.5}}
            print(structure)


{1: {1}}

{2: {1.4142135623730951, 4}}

{3: {9, 1.7320508075688772}}

{4: {16, 2.0}}

{5: {25, 2.23606797749979}}

{6: {2.449489742783178, 36}}

{7: {49, 2.6457513110645907}}

{8: {64, 2.8284271247461903}}

{9: {81, 3.0}}

#Finally, it worked. Now, I push my experiment further and I introduce a third coordinate, the natural logarithm of the cardinal position.  

>>> import math   #I turn on the module with mathematical functions

>>> for x in range(1, 10):
            structure={x:{x**2, x**0.5, math.log(x)}}
            print(structure)


{1: {0.0, 1}}

{2: {0.6931471805599453, 1.4142135623730951, 4}}

{3: {9, 1.0986122886681098, 1.7320508075688772}}

{4: {16, 1.3862943611198906, 2.0}}

{5: {25, 2.23606797749979, 1.6094379124341003}}

{6: {1.791759469228055, 2.449489742783178, 36}}

{7: {49, 2.6457513110645907, 1.9459101490553132}}

{8: {64, 2.8284271247461903, 2.0794415416798357}}

{9: {81, 2.1972245773362196, 3.0}}

Good. It seems to work. What I have just succeeded in doing is to learn a typical, very simple structure in Python, and this structure does something slightly different from Excel: it generates a sequence of specific structures out of a general, logical structure I specified. I am wrapping up my learning from my learning: it took me four days to start looking actively for new possibilities offered by the Python. If I extrapolate to the scale of collective behaviour, we have those four patterns of innovating: a) trying to obtain old outcomes with the new technology, b) making mistakes, seeing my efficiency plummet, and learning new skills, c) rearranging my resources for the new technology, d) experimenting and exploring the new possibilities which come with the new technology. As I refer this generalized account of my individual experience to the literature I quoted a few paragraphs earlier, one question imposes itself: how does a breakthrough occur in these specific patterns of behaviour? I can assume there is a critical amount of learning and adaptation required in the presence of a new technology, which can possibly, referring once more to Arnold Toynbee's metaphor of struggling civilisations, make a technological transition risky, impossible, or null in its balanced outcomes.
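Coming back to the code for one more moment: the same sequence of structures can be produced in a single dictionary comprehension, which also keeps all nine entries instead of overwriting ‘structure’ at every pass of the loop; using a tuple instead of a set keeps the three coordinates in a fixed order:

```python
import math

# one dictionary holding all nine cardinals, each with its three ordered coordinates
structure = {x: (x ** 2, x ** 0.5, math.log(x)) for x in range(1, 10)}

print(structure[4])     # (16, 2.0, 1.3862943611198906)
print(len(structure))   # 9
```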

[1] de Vries, Bert J.M., van Vuuren, Detlef P., Hoogwijk Monique M., 2007, Renewable energy sources: Their global potential for the first-half of the 21st century at a global level: An integrated approach, Energy Policy, vol. 35 (2007), pp. 2590–2610

[2] Edgerton, D. (2011). Shock of the old: Technology and global history since 1900. Profile books

[3] Braudel, F., 1981, Civilization and Capitalism, Vol. I: The Structures of Everyday Life, rev.ed., English Translation, William Collins Sons & Co London and Harper & Row New York, ISBN 00216303 9

[4] Braudel, F., 1983, Civilisation and Capitalism. Part II: The Wheels of Commerce, trans. Sian Reynolds, Book Club Associates, William Collins Sons & Co

[5] Toynbee, A. J., 1946, A Study of History, Oxford University Press, p. 69

I reorganise my resources for the Python

My editorial

Here I am, continuing this strange and refreshing experience of doing research on technological change at the same time as I am implementing technological change in myself, i.e. learning the Python, a programming language fashionable in the places frequented by people like me. Those places are universities, conferences, libraries etc. You see the type. Over the last two days, I have already discovered that, when getting to grips with this new technology (new for me, I mean), I had intuitively started by using the Python to do exactly the same thing that Excel does, or my statistical software, Wizard for MacOS. I had thus applied myself to building a database the way I see it, i.e. as a table. In Python, data is organised in logical structures, not graphical ones, so my efforts had been largely in vain. I was making a lot of mistakes, sometimes stupid ones. I needed a surprising number of tries to acquire an acceptably intuitive capacity to distinguish between valid logical structures and those which are not. It also took me a disconcertingly long time to make the distinction between logical structures with words and symbols inside (the so-called ‘strings’) and the structures that contain numbers. For the non-initiated: a series of names in Excel looks like a column, or like a row, thus like:


   …ou bien comme

Mot#1 Mot#2 Mot#3

…whereas in Python it would rather be:

>>> Mots=['Mot#1', 'Mot#2', 'Mot#3']

… so each word between straight single quotes, the words separated by commas, and the whole enclosed in square brackets. Strictly speaking, each quoted word is what Python calls a 'string', and the bracketed sequence as a whole is a 'list'.
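To make the vocabulary concrete, here is a minimal check, in standard Python, of what the interpreter itself calls these structures:

```python
# The bracketed structure is a list; each quoted item inside it is a string.
Mots = ['Mot#1', 'Mot#2', 'Mot#3']

print(type(Mots).__name__)     # the whole structure is a 'list'
print(type(Mots[0]).__name__)  # each item is a 'str' (string)
print(len(Mots))               # 3 items, indexed from 0
```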

If I want to organize numerical values, the way I have learned so far is to put a sequence between round brackets (a 'tuple' in Python's vocabulary):

>>> Nombres=(23, 34, 45, 56)

As you can see, Python requires a logical order: the logical or numerical value in position no. 1, then the one in position no. 2, and so on.

In short, at first I put a lot of effort into forcing this new technology into the old structure of my habits, and only afterwards did I start learning new habits. All of that translated into downright lamentable efficiency. I was advancing at a snail's pace. Yes, I know, a snail has no feet, so strictly speaking it cannot pace anywhere, but 'snail's pace' sounds good.

Last night, while training myself to develop as intuitive an understanding as possible of these logical structures in Python, I started doing something else: stockpiling data organized the Python way. The tables I download from the World Bank's website are organized by country and year of observation, countries in a column, years across the top row. I sense intuitively that once I create a list of the countries and save it in a separate Python file, it will serve me many times over, for learning new tricks or simply for doing analyses later, once I have learned how to use that list of countries. With two quick tests for grammar and two small readjustments, it took me 26 minutes to create the file « », which contains a list named « countries », which, in turn, contains the same list of countries and regions, in English, that you can find in the World Bank's Excel tables.

What I have just done amounts to reorganizing my resources so as to make them more accessible to the new technology I am learning. For the moment, I did it manually: copying the table of countries from Excel into my Word editor, then converting it to text, copying that text into the Python console and organizing those country names into a list « countries=['Pays#1', 'Pays#2', etc.] ». I know, I know, I could download a CSV file directly from the World Bank's website and unpack it with the Python commands of the 'csv' module. Yes, I could have done it that way if I knew exactly how, and that is still in the future for me. So I faced a choice: organize my resources for the new technology in a rather crude and unproductive manner, or wait until I learn a faster and more elegant way of accomplishing the same task. I chose the first option.
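For the record, the csv-module route postponed here could look roughly like the sketch below; the file layout (country names in the first column, one header row) and the sample data are assumptions, not the actual World Bank file:

```python
import csv
import io

# Hypothetical sample standing in for a downloaded World Bank CSV;
# the real file's name and layout are assumptions here.
sample = 'Country Name,2015,2016\nAruba,1.0,1.1\nAfghanistan,2.0,2.1\n'

# With a real download one would write: open('worldbank.csv', newline='')
with io.StringIO(sample) as f:
    reader = csv.reader(f)
    next(reader)                            # skip the header row
    countries = [row[0] for row in reader]  # first column holds the names

print(countries)
```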

So now I can generalize my observations about my own behaviour in this experiment on myself. In the presence of a new technology, I have developed something like three models of behaviour:

Model #1: I try to use the new technology (Python) to achieve the same results I used to achieve with the previous technology (Excel, Wizard). My choice consists mostly in deciding how much time and energy I will devote to trying to apply the old logic to the new technology.

Model #2: I have realized that the old habits do not work, and I start learning new ones. My choice happens on two levels. First, I make small, instantaneous decisions about the next step of learning to take. On a second level, I decide how much time and energy to devote to learning the skills required by the new technology. It is a choice made in view of uncertain future benefits.

Model #3: I know that one of the basic skills in the Python universe consists in organizing my basic resource, information, into specific structures. I decide to take a first step in that direction, and I devote about half an hour to creating a list of countries and regions exactly conforming to the order used in the World Bank data. I remember making two distinct choices: whether to do it at all, and which type of information to organize first. I made those two decisions knowing that I was using a technique far from optimal in the universe of this new technology, and knowing that I did not yet know how I would use this list of countries in the future (I have not got that far in my learning of Python).

By the way, just so you have something solid after reading this update: below the text, I have copied that list of countries exactly as I typed it manually. As it stands now, it should be directly usable in Python. You can copy it and put it in a program. Syntax-wise, it is correct: I tested it with the command « list(countries) » and Python returned a list without yelling 'error!'. On that occasion, I learned that the file repository on my site , in the WordPress environment, does not accept Python files. I wanted to upload and store the file « » there, and the site returned an error. A small lesson about the compatibility of technologies.

Now I apply this new, thoroughly empirical knowledge to make a theoretical generalization. If we consider two technologies, an old one TC0 and a new one TC1, the transition from TC0 to TC1 is linked, among other things, to three phenomena: a complex functional substitution, a learning cost CA(TC0 ; TC1), and a cost of reorganizing resources CR(TC0 ; TC1). The first phenomenon, that of complex substitution, can be represented as a relation between two sets. There is a set F(TC0) = {f1, f2, …, fi} of the functions performed by technology TC0 and, by analogy, I define a set F(TC1) = {g1, g2, …, gk} of the functions performed by technology TC1. Now, before I go further, a quick word of explanation about the presentation: I am writing this content knowing that I will copy it into my two blogs, the one on WordPress at , and the one in the Blogger environment, at the address . Neither of the two is really on friendly terms with the MS Word equation editor, so I write the equations with keyboard symbols. I could not find any keyboard that allows typing mathematical operators directly, including set operators. I therefore adopt a simplified convention in which the symbols +, -, * and / stand, respectively, for the sum, difference, product and quotient of sets.

The relation of complex substitution between technologies means that, at a given level of users' skills, the sets F(TC0) = {f1, f2, …, fi} and F(TC1) = {g1, g2, …, gk} have a common part, or F(TC0)*F(TC1), which corresponds to the functions performed by both technologies. I define the degree of substitution between the two technologies as the complex quotient: SU(TC0 ; TC1) = [F(TC0)*F(TC1)] / [F(TC0) + F(TC1)]. I formally hypothesize that the learning cost CA(TC0 ; TC1), as well as the cost of reorganizing resources CR(TC0 ; TC1), are both inversely proportional to the value of the complex quotient SU(TC0 ; TC1), or:

CA(TC0 ; TC1) = a1 / SU(TC0 ; TC1)

CR(TC0 ; TC1) = a2 / SU(TC0 ; TC1)

a1 > 0 ; a2 > 0
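Python's built-in sets make this complex quotient directly computable, reading the product * as the intersection and the sum + as the union. A sketch with invented function labels (and arbitrary values for a1 and a2):

```python
# Invented function labels, purely for illustration
F_TC0 = {'tables', 'charts', 'linear regression'}       # functions of the old technology
F_TC1 = {'tables', 'linear regression', 'neural nets'}  # functions of the new one

# SU(TC0 ; TC1): common part (*) divided by the sum (+), i.e. intersection
# over union, which keeps SU between 0 and 1
SU = len(F_TC0 & F_TC1) / len(F_TC0 | F_TC1)

CA = 1.0 / SU   # learning cost, inversely proportional to SU (a1 = 1 assumed)
CR = 2.0 / SU   # reorganization cost (a2 = 2 assumed)
print(SU, CA, CR)  # 2 common functions out of 4 distinct ones -> SU = 0.5
```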

When I look at this logical structure, I tell myself that it risks getting a bit heavy with many technologies substituting one another in a complex way, and with many functional attributes. So I see an alternative way of representing the same proposition, with the help of Euclidean distance. You know, that thing based on the Pythagorean theorem: if we have two points A and B, each defined by two coordinates x and y, we can calculate the distance between them as d = ((x(A) - x(B))² + (y(A) - y(B))²)^0.5. Now, I replace points A and B with my technologies TC0 and TC1, or TC-whatever-you-like for that matter, and I endow each of them with two measurable attributes x and y. I can then calculate the Euclidean distance 'd' within a given pair of technologies. As I remain aware that I should be learning Python, here below I present the fruit of three days of learning: a small Euclidean distance calculator in Python 3.6.2:

# I start by defining the coordinates of the three technologies TC0, TC1 and TC2

>>> TC0=(12, 45)

>>> TC1=(34, 15)

>>> TC2=(17, 30)

>>> import math      # I import the module of mathematical functions, just to make my task easier

# I define my calculator for the first pair of technologies

>>> for n in range(0, len(TC0)):

            for m in range(0, len(TC1)):

                       print(math.sqrt((math.pow(34-12, 2)+math.pow(15-45, 2))))

# the compiler displayed the same result four times because the nested loops execute the print len(TC0)*len(TC1) = 2*2 = 4 times, each time with the same hard-coded coordinates

# I repeat with the other pair of technologies

>>> for n in range(0, len(TC0)):

            for m in range(0, len(TC2)):

                       print(math.sqrt((math.pow(17-12, 2)+math.pow(30-15, 2))))

# the same result four times again, for the same reason
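The hard-coded calculator above generalizes into a small function that reads the coordinates straight from the tuples, so the same code serves any pair of technologies; a sketch rather than the original session:

```python
import math

TC0 = (12, 45)
TC1 = (34, 15)
TC2 = (17, 30)

def distance(a, b):
    # Euclidean distance between two technologies given as coordinate tuples
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

print(distance(TC0, TC1))   # formerly hard-coded as sqrt((34-12)**2 + (15-45)**2)
print(distance(TC0, TC2))
```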

Right, I am starting to feel intellectual fatigue. Time to wrap up for today. See you soon.

As promised, here is the list of countries for Python, conforming to the structure used by the World Bank:

countries=['Aruba', 'Afghanistan', 'Angola', 'Albania', 'Andorra', 'Arab World', 'United Arab Emirates', 'Argentina', 'Armenia', 'American Samoa', 'Antigua and Barbuda', 'Australia', 'Austria', 'Azerbaijan', 'Burundi', 'Belgium', 'Benin', 'Burkina Faso', 'Bangladesh', 'Bulgaria', 'Bahrain', 'Bahamas The', 'Bosnia and Herzegovina', 'Belarus', 'Belize', 'Bermuda', 'Bolivia', 'Brazil', 'Barbados', 'Brunei Darussalam', 'Bhutan', 'Botswana', 'Central African Republic', 'Canada', 'Central Europe and the Baltics', 'Switzerland', 'Channel Islands', 'Chile', 'China', 'Cote d_Ivoire', 'Cameroon', 'Congo, Dem. Rep.', 'Congo, Rep.', 'Colombia', 'Comoros', 'Cabo Verde', 'Costa Rica', 'Caribbean small states', 'Cuba', 'Curacao', 'Cayman Islands', 'Cyprus', 'Czech Republic', 'Germany', 'Djibouti', 'Dominica', 'Denmark', 'Dominican Republic', 'Algeria', 'East Asia & Pacific (excluding high income)', 'Early-demographic dividend', 'East Asia & Pacific', 'Europe & Central Asia (excluding high income)', 'Europe & Central Asia', 'Ecuador', 'Egypt, Arab Rep.', 'Euro area', 'Eritrea', 'Spain', 'Estonia', 'Ethiopia', 'European Union', 'Fragile and conflict affected situations', 'Finland', 'Fiji', 'France', 'Faroe Islands', 'Micronesia, Fed. Sts.', 'Gabon', 'United Kingdom', 'Georgia', 'Ghana', 'Gibraltar', 'Guinea', 'Gambia, The', 'Guinea-Bissau', 'Equatorial Guinea', 'Greece', 'Grenada', 'Greenland', 'Guatemala', 'Guam', 'Guyana', 'High income', 'Hong Kong SAR, China', 'Honduras', 'Heavily indebted poor countries (HIPC)', 'Croatia', 'Haiti', 'Hungary', 'IBRD only', 'IDA & IBRD total', 'IDA total', 'IDA blend', 'Indonesia', 'IDA only', 'Isle of Man', 'India', 'Not classified', 'Ireland', 'Iran, Islamic Rep.', 'Iraq', 'Iceland', 'Israel', 'Italy', 'Jamaica', 'Jordan', 'Japan', 'Kazakhstan', 'Kenya', 'Kyrgyz Republic', 'Cambodia', 'Kiribati', 'St. Kitts and Nevis', 'Korea, Rep.', 'Kuwait', 'Latin America & Caribbean (excluding high income)', 'Lao PDR', 'Lebanon', 'Liberia', 'Libya', 'St. Lucia', 'Latin America & Caribbean', 'Least developed countries: UN classification', 'Low income', 'Liechtenstein', 'Sri Lanka', 'Lower middle income', 'Low & middle income', 'Lesotho', 'Late-demographic dividend', 'Lithuania', 'Luxembourg', 'Latvia', 'Macao SAR, China', 'St. Martin (French part)', 'Morocco', 'Monaco', 'Moldova', 'Madagascar', 'Maldives', 'Middle East & North Africa', 'Mexico', 'Marshall Islands', 'Middle income', 'Macedonia, FYR', 'Mali', 'Malta', 'Myanmar', 'Middle East & North Africa (excluding high income)', 'Montenegro', 'Mongolia', 'Northern Mariana Islands', 'Mozambique', 'Mauritania', 'Mauritius', 'Malawi', 'Malaysia', 'North America', 'Namibia', 'New Caledonia', 'Niger', 'Nigeria', 'Nicaragua', 'Netherlands', 'Norway', 'Nepal', 'Nauru', 'New Zealand', 'OECD members', 'Oman', 'Other small states', 'Pakistan', 'Panama', 'Peru', 'Philippines', 'Palau', 'Papua New Guinea', 'Poland', 'Pre-demographic dividend', 'Puerto Rico', "Korea, Dem. People's Rep.", 'Portugal', 'Paraguay', 'West Bank and Gaza', 'Pacific island small states', 'Post-demographic dividend', 'French Polynesia', 'Qatar', 'Romania', 'Russian Federation', 'Rwanda', 'South Asia', 'Saudi Arabia', 'Sudan', 'Senegal', 'Singapore', 'Solomon Islands', 'Sierra Leone', 'El Salvador', 'San Marino', 'Somalia', 'Serbia', 'Sub-Saharan Africa (excluding high income)', 'South Sudan', 'Sub-Saharan Africa', 'Small states', 'Sao Tome and Principe', 'Suriname', 'Slovak Republic', 'Slovenia', 'Sweden', 'Swaziland', 'Sint Maarten (Dutch part)', 'Seychelles', 'Syrian Arab Republic', 'Turks and Caicos Islands', 'Chad', 'East Asia & Pacific (IDA & IBRD countries)', 'Europe & Central Asia (IDA & IBRD countries)', 'Togo', 'Thailand', 'Tajikistan', 'Turkmenistan', 'Latin America & the Caribbean (IDA & IBRD countries)', 'Timor-Leste', 'Middle East & North Africa (IDA & IBRD countries)', 'Tonga', 'South Asia (IDA & IBRD)', 'Sub-Saharan Africa (IDA & IBRD countries)', 'Trinidad and Tobago', 'Tunisia', 'Turkey', 'Tuvalu', 'Tanzania', 'Uganda', 'Ukraine', 'Upper middle income', 'Uruguay', 'United States', 'Uzbekistan', 'St. Vincent and the Grenadines', 'Venezuela, RB', 'British Virgin Islands', 'Virgin Islands (U.S.)', 'Vietnam', 'Vanuatu', 'World', 'Samoa', 'Kosovo', 'Yemen, Rep.', 'South Africa', 'Zambia', 'Zimbabwe']

Theorizing and learning by doing, whilst wrestling with Python

My editorial

And so I am experimenting with my thinking. I am writing about the choices we make regarding new technologies whilst placing myself in a situation of absorbing a new technology: the Python. I am talking about that programming language, reputed for being particularly useful in studying quantitative data. I thought it would be interesting to write about people's choices whilst making a choice about a new technology myself. Just to be precise: the variety of Python I am enduring my pains of neophyte with is Python 3.6.2, and besides the materials provided with the standard Python library I am using a book by Toby Segaran, entitled 'Programming Collective Intelligence', published by O'Reilly Media, Inc. (ISBN-10: 0-596-52932-5; ISBN-13: 978-0-596-52932-1). I want to discover a few things. Firstly, I want to rediscover the process of discovering something new. It has been a long time since I really had to absorb some brand new set of skills and understandings. I recently became aware of that: I write about innovation and technological change, about whole communities switching to renewable energies, but with all that, I am not absorbing any true novelty myself. Secondly, data science is becoming kind of adjacent a discipline for social sciences, and I wanted to find out what I can get out of Python that my statistical software (i.e. Wizard for MacOS) does not provide. So I want to discover new possibilities, and I want to rediscover what it is to discover new possibilities.

I really started this experiment yesterday. As I am recollecting what I have been doing, yesterday and today, a few patterns emerge. As I mentioned, I started under a twofold impulse: an idea connected to my research and writing, for one, and a sort of vague peer pressure, for two. Being my own guinea pig, I assume I am somehow representative, and so I peg down those two motivations: curiosity awoken by current activity, and peer pressure, also connected to present activity. As a matter of fact, I started by implementing Python in my current activity, i.e. in research. I tried to use the basic notations for expressing deterministic choice (see 'Qu'est-ce que ça fait d'être mon propre étudiant, avec Python'). Then, I used Python to meddle with quantitative data, in the CSV format. I played with building lists and sets of values out of that format. I am progressively getting acquainted with the distinction between strings, written in quotes (e.g. 'x'), and names and numerical values, written without quotation marks. Anyway, I took something I already do with the tools I already have, and I am trying to do it with Python. It is awkward, it is slow, it involves a lot of trial and error. I have to solve new problems. I am confronted with something apparently smart (the Python), which returns dull, preformatted answers in the case of failed attempts on my part. Tons of fun, in other words. Here, I have a second pattern, important for understanding innovation: every time we absorb a new technology, we kind of insert it into our current life and we keenly observe what changes under the impulse. That change involves an initial drop in efficiency. What I am trying to do with Python now, I do much faster and more artfully with my Excel and my Wizard combined.

The drop in efficiency I am experiencing is very much connected to the logic I have to use, or, in other words, to the structure of information. In Excel, I have tabularized structures. In Wizard, I hardly pay attention to the raw data, previously imported from Excel, and I focus on analysis, which involves picking from pre-formatted lists, by clicking. Anyway, both in Excel and in Wizard, I have a two-dimensional structure, where I move mostly graphically, with a mouse or a touchpad. In Python, I have logical structures akin to ordinary writing. I have to type all those structures myself, or copy them from Excel and format them into the written way. I am becoming aware, I mean really aware, that what I used to perceive as adjacent columns or rows of data are, in fact, separate logical structures linked by some kind of function. I encounter two types of challenges here. The most elementary one is the shift from reversible to irreversible. In Python, in order to have any logical structure usable in subsequent operations, like a list of names or a set of values, I have to validate it with an 'Enter'. Still, after I have validated it, I do not know yet how to alter it. If I made a mistake, I have to define a new structure, almost identical to the previous one, save for the corrected mistakes, and put it under the same name I gave to the previous one. Defining functions that link different logical structures is much harder for me, at least for the moment.
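One nuance worth noting here: the round-bracketed structures (tuples) are indeed frozen once validated, but the square-bracketed ones (lists) can be corrected element by element, without redefining the whole thing. A minimal sketch:

```python
people = ['Geogre', 'Chris', 'Eleonore']  # a typo slipped into the first item
people[0] = 'George'                      # a list is mutable: fix one element in place
print(people)

numbers = (23, 34, 45)                    # a tuple, validated once and for all
try:
    numbers[0] = 99
except TypeError:
    print('a tuple cannot be altered after validation')
```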

As I am wrestling with the Python, I recall my early experience with software like DBASE or Lotus, back in the early 1990s. Those pieces of software were something between the Python and modern software based on graphical interfaces. I am reconstructing the path of learning I have covered, and I am forming another pattern regarding innovation. From DBASE or Lotus, the path has been towards simplification and greater a comfort on the user's side, whilst creating increasing complexity on the programming side. What I am doing now, as I am learning Python, is to deconstruct that comfort and simplicity and see what I can get if I rethink and relearn the technology of comfort and simplicity. Just to give you an idea of what I am working with right now (those initiated to Python, I implore magnanimity on your part), I give the example of what I have written kind of on the side, whilst writing what I am writing for the blog. Here it is:

# I want to define a set of people, a set of technologies, and to illustrate the fact of preferring some technologies over others. So I define my list of 'people':

>>> people=['Geogre', 'Chris', 'Eleonore', 'Kathy', 'Miron', 'Eva']

# S***, I made a ‘Geogre’ out of ‘George’; I have to define people again

>>> people=['George', 'Chris', 'Eleonore', 'Kathy', 'Miron', 'Eva']

>>> technologies=['photovoltaic on the roof', 'own water turbine', 'connection to smart grid', 'cycling more driving less']

>>> scores=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

# Now, what I want is to connect somehow the scores to the people and to the technologies. I want to figure out some kind of function, which would put those three categories together. Here below you can see a failed attempt:

 >>> for n in people and for m in technologies return(scores [3])

SyntaxError: invalid syntax << # What did I tell you?

#Now, I am less ambitious. I am trying to create a logical structure from just two categories: people and technologies. I just want to put them in pairs, and label that matching ‘experiments with’. Here is what I did pretty intuitively:

>>> for n in range(0, len(people)):

                  for m in range(0, len(technologies)):

                                   print(people [n], 'experiments with', technologies [n])

# It did not yield ‘SyntaxError’, which is already a plus. After an additional ‘Enter’, I had this:

George experiments with photovoltaic on the roof

George experiments with photovoltaic on the roof

George experiments with photovoltaic on the roof

George experiments with photovoltaic on the roof

Chris experiments with own water turbine

Chris experiments with own water turbine

Chris experiments with own water turbine

Chris experiments with own water turbine

Eleonore experiments with connection to smart grid

Eleonore experiments with connection to smart grid

Eleonore experiments with connection to smart grid

Eleonore experiments with connection to smart grid

Kathy experiments with cycling more driving less

Kathy experiments with cycling more driving less

Kathy experiments with cycling more driving less

Kathy experiments with cycling more driving less

# Oouff, it worked, for a change.

Simple lesson: coming up with structures that work takes experimentation. Experimentation takes time and energy and still, at the same time, yields learning. Experimenting with new technologies is a trade-off between the time and energy devoted to experimentation, on the one hand, and the learning we draw from it, on the other hand. The next step I am trying to achieve, unsuccessfully as for now, is to connect people and technologies to the numerical values in 'scores'. Here below, you can find a short account of my failures:
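For what it is worth, the quadruple repetition in the output above seems to come from printing technologies [n] inside the loop that advances m: each pairing is printed once per pass of the inner loop, and 'Miron' and 'Eva' can never appear, because the list of technologies has no items at indices 4 and 5. Indexing the inner list with m yields each distinct pair exactly once; a sketch:

```python
people = ['George', 'Chris', 'Eleonore', 'Kathy', 'Miron', 'Eva']
technologies = ['photovoltaic on the roof', 'own water turbine',
                'connection to smart grid', 'cycling more driving less']

pairs = []
for n in range(len(people)):
    for m in range(len(technologies)):   # the inner list is indexed with m, not n
        pairs.append((people[n], technologies[m]))
        print(people[n], 'experiments with', technologies[m])

print(len(pairs))   # 6 people * 4 technologies = 24 distinct pairs
```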

>>> list(scores)

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

>>> scores=[list(scores)]

>>> scores [3]

Traceback (most recent call last):

  File "<pyshell#12>", line 1, in <module>

    scores [3]

IndexError: list index out of range

>>> scores[2]

Traceback (most recent call last):

  File "<pyshell#13>", line 1, in <module>

    scores[2]
IndexError: list index out of range

What did I do wrong? Where is the flaw in my thinking? On the mechanical level, the culprit is the line scores=[list(scores)]: the extra square brackets wrap my list of ten values inside a one-element outer list, so any index above 0 is out of range. On the conceptual level, I think I know: what I had been trying to do was to pick up, arbitrarily, a value from the set of 'scores' and connect it somehow to people and technologies. I had been trying to do what I commonly do with Excel: I just sort of tried to match them, as if I were making a table. I was trying to work with the new technology (i.e. the Python) just as I am used to working with the old technology, namely Excel. It does not work. New technology requires new skills and habits. There is that thing about logical structures: the numerical ones are different from the strictly verbal ones. Numbers are of a different nature than words and letters. If I want to express the preferences of my 'people' for particular 'technologies', I have to construct a function of preference, i.e. a function which yields a numerical result through a mathematically valid process. At this point, I am becoming aware that I have two alternative logics of constructing such a function: through positioning or through inference.
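One way of linking the three categories, sketched with invented scores rather than anything from Segaran's book, is a dictionary keyed by (person, technology) pairs, which already behaves like a rudimentary function of preference:

```python
people = ['George', 'Chris']
technologies = ['photovoltaic on the roof', 'own water turbine']

# Invented scores on the 1-10 scale, keyed by (person, technology) pairs
preferences = {
    ('George', 'photovoltaic on the roof'): 8,
    ('George', 'own water turbine'): 3,
    ('Chris', 'photovoltaic on the roof'): 5,
    ('Chris', 'own water turbine'): 9,
}

for person in people:
    # each person's highest-scored technology
    best = max(technologies, key=lambda t: preferences[(person, t)])
    print(person, 'prefers', best)
```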

The first technique consists in ascribing, to my 'people' and my 'technologies', some kind of coordinates in a virtual space. It is very similar to the Bayesian rectangle: I represent phenomena as a geometrical structure. This structure has a frontier delimiting its expanse, and, possibly, some additional secants, which serve to fine-tune the position of a given point in this space. Positioning looks like the right way to construct a function of preference when position matters. Position is the distance of the given point (person, technology etc.) from some peg points fixed in the given space. In that book by Toby Segaran, I found an interesting reminder about the application of the Pythagorean theorem when it comes to distance in an imaginary space: a² + b² = c². If you have two points, and you ascribe two coordinates to each (actually, this would be a plane, not a space, but who cares?), you can measure the distance between those two points on each coordinate separately, as a difference, then square those differences, add them up, and take the square root of the so-calculated sum. Summing up, the technique of positioning allows ascribing to people preferences which result from their relative distance from your peg points or peg axes: conservatism, openness, liberalism etc. The technique of inference assumes that if I have some characteristics X, Y, Z, my preferences are most likely to be A, B, C, and usually those As, Bs, and Cs are the expected values in a statistical distribution. If I am Polish and my friend is French, and France has greater a share of renewable energies in its final consumption of energy than Poland, my friend is supposed to have stronger a preference for putting photovoltaic panels on his roof, or something along those lines.
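The positioning technique can be sketched numerically: place each person, and a peg point, in a two-coordinate space (the coordinates below are invented for illustration) and let the strength of preference fall off with the Pythagorean distance from the peg:

```python
import math

# Invented coordinates (say: openness, conservatism); the peg point stands
# for 'keen on renewable energies'
peg = (8.0, 2.0)
people = {'George': (7.0, 3.0), 'Chris': (2.0, 9.0)}

def pythagorean_distance(a, b):
    # a**2 + b**2 = c**2, applied to the differences of coordinates
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

for name, position in sorted(people.items()):
    d = pythagorean_distance(position, peg)
    print(name, 'is at distance', round(d, 3), 'from the peg point')
# the closer a person sits to the peg, the stronger the preference we ascribe
```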

I am wrapping up this account of my personal experience with innovation. I use the new technology, in the first place, to reach the same outcomes I commonly achieve with the old technology. I basically squeeze the new technology into the social role I have now. When a lot of people do the same, it means that the new technology, in a first step of collective learning, adapts to the existing social structure. Then, I intuitively use old habits and skills with the new technology, which causes problems, and diminishes my efficiency, and still it is lots of fun, and it makes me learn something new. Once again, as I translate this personal experience into collective patterns of behaviour: the greater the relative distance between the new technology and the old one, the more trial and error there is, the more learning is required, thus the deeper is the trough in efficiency, and the greater is the leap brought by learning the new technology. Relating it to the old concept of technological determinism in Karl Marx, I can see the weakness of that thinking. It assumed that the greater the economic power of a technology, i.e. the greater the technical improvements it can bring, the greater its power to bend the social structure around it. Still, my little experiment brings in a new variable: the greater the transformative power of a new technology, the more learning it requires, and the less we can predict about how the hell it will change the social structure around it.

What it feels like to be my own student, with Python

My editorial

I am thinking about two things. Well, obviously, I am thinking about more than just two things, but that is a turn of phrase meant to indicate plurality in the line of my discourse. I am thinking about numbers, and I am thinking in terms of a logical structure for an article or a book. On the one hand, I keep wondering how I could formulate hypotheses for my own research on renewable energies following the mathematical logic of Thomas Bayes. On the other hand, I have the impression that the time has come to translate the research I have done over the last 4 - 5 weeks into a form more canonical than a blog: an article, or the outline of a book. In fact, when I think about it, the one connects to the other, and on top of that a third thread of thought raises its head: translating those linear regressions, of which I have already done so many, into Bayesian hypotheses. Oh, so there is a fourth one too? It was obvious: my fourth thread of thought is that of a professor. When I do research, and when I work on presenting that research so as to make it intelligible, I can use the occasion to teach a class. A class in what, you will ask? Well, a class in whatever I am currently doing. When I manage to explain the method I apply to a particular problem, and when that explanation is simple and clear enough to be understood by my students, it is as if I had written a chapter of a social science textbook.

Here, there is one more small detail: to be a good professor, it is useful to put oneself in the position of a student, that is, of someone trying to absorb completely new knowledge and, with some luck, to develop personal skills on that basis. How can I be a student and a professor at the same time? It is relatively simple. First, one has to be already of a certain age, which is a specific class within the broader category of the age that is certain, and the certain age, for me, counts from May 9th, 1968. Second, one has to have gifted offspring, which, in my case, is my son, who is starting the third year of his bachelor's degree in computer science. For me, being a student means learning Python. For the complete non-initiates: Python is a programming language, reputed to be particularly useful in data analysis and in programming artificial intelligence. My own intelligence is starting to show its age, so it is logical that I try to make myself an artificial one. I have a tooth made of ceramic composite; I might as well have an intelligence prosthesis in Python. For the initiated: first, I implore your understanding for my neophyte's torments, and second, I specify that said torments occur in contact with Python 3.6.1.

Right, I am trying to be logical. The general hypothesis that I already put forward in my last article, and that I fully intend to maintain, is that the technological changes observable in the global economy occur as a process of intelligent adaptation aiming to maximize the absorption of energy by the human species. Intelligent adaptation means a process in which consecutive generations of organisms are produced, and each generation is conceived through a selection between male organisms and female organisms. That selection takes place according to a function of preference, which, in turn, creates social hierarchies among males and females. Whatever context I take for this general hypothesis, renewable energies or Ferraris styled as old Citroëns, it boils down to a sequence of choices. My choices I can apprehend in three different ways: deterministic, probabilistic in the manner of De Moivre - Laplace, or probabilistic in the Bayesian manner.

A deterministic choice is one where choosing a given option always and inevitably results in a given consequence. The choice is then the cause, and its consequences are the effect. If I pythonize that, neophyte-style, it gives more or less:

>>> choix_déterministe=['option A', 'option B', 'option C']

>>> conséquences_déterministes=['effet 1', 'effet 2', 'effet 3']

>>> if choix_déterministe [0]:

            conséquences_déterministes [0]

            if not choix_déterministe [0]:

                       conséquences_déterministes [1] or conséquences_déterministes [2]

You may notice I have been rather lazy there: I reduced a multiple choice between three options A, B, C to an Aristotelian dichotomy of the form "A or not-A". A word of clarification is in order about the notation. In the version of Python I am using, indexing starts at zero: the first element of a list or a string (I know, it's funny, but if you intend to laugh every time I mention a string, you won't be stopping any time soon with Python) is element [0], the second is [1], and so on.
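A minimal sketch of that zero-based indexing; the list and the word below are my own throwaway examples:

```python
# Zero-based indexing: the first element is [0], the second [1], and so on.
options = ['option A', 'option B', 'option C']
print(options[0])   # prints option A
print(options[2])   # prints option C

# The same rule applies to a string, character by character.
mot = 'Python'
print(mot[0])       # prints P
print(mot[-1])      # negative indices count from the end: prints n
```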

If I want to develop my deterministic choice properly, without Aristotelian dodges, it would look like this:

>>> choix_déterministe = ['option A', 'option B', 'option C']

>>> conséquences_déterministes = ['effet 1', 'effet 2', 'effet 3']

>>> if choix_déterministe[0]:
        conséquences_déterministes[0]
    elif choix_déterministe[1]:
        conséquences_déterministes[1]
    else:
        conséquences_déterministes[2]

You should know that these sequences of Python 3.6.1 commands I am placing here are the ones the interpreter accepted as valid, that is, after validating them with an 'Enter' I did not get one of those irritating messages like 'Invalid syntax'. On the other hand, I haven't the faintest idea how these sequences would behave inside a serious algorithm. I see an interesting metaphor in that, by the way. Our choices can be evaluated by the way they present themselves to others, that is, people wonder whether our choices are elegant, or we can ask how those choices fit into real life.

The deterministic choice is rather primitive as a scientific concept, but it is good to remember that this is precisely how we think. In the vast majority of cases, we decide while being convinced that our choice will bring well-determined consequences. Probability is something we take notice of after the fact. So if I go back to the old concept of technological determinism à la Karl Marx, the most elementary way of representing it in Python would be:

>>> choix_technologique = ['technologie A', 'technologie B', 'technologie C']

>>> conséquences_sociales = ['structure sociale X1', 'structure sociale X2', 'structure sociale X3']

>>> if choix_technologique[0]:
        conséquences_sociales[0]
    elif choix_technologique[1]:
        conséquences_sociales[1]
    else:
        conséquences_sociales[2]

>>> choix_technologique[2]
'technologie C'

Once again, this scheme represents the way we think rather than the way things actually work. If I want to represent events as they happen, I can, for example, imagine two sets: a set of occurrences and a set of probabilities. Then I associate them with a simple rule: "If event A occurs, event B will follow with a probability of 10%, but beyond that, we don't really know". In Python, that would give:

>>> occurrences = ['technologie A amène la structure sociale X1', 'structure sociale X1 favorise technologie A', 'technologie A amène structure sociale X2', 'technologie A amène structure sociale X3']

>>> probabilités = [0.1, 0.2, 0.3, 0.4]

>>> if occurrences[0]:
        probabilités[0]
    else:
        probabilités[1] or probabilités[2] or probabilités[3]
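To make that rule actually do something rather than just sit there, one could simulate it. A minimal sketch, assuming the 10% figure from the rule above; the simulation itself, using the standard `random` module, is my own addition, not part of any serious model:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Rule: structure sociale X1 follows technologie A with probability 0.1;
# beyond that, "we don't really know", so we draw among the remaining
# structures at random.
def conséquence():
    if random.random() < 0.1:
        return 'structure sociale X1'
    return random.choice(['structure sociale X2', 'structure sociale X3'])

# Repeat the experiment many times: the 10% shows up as a frequency.
essais = [conséquence() for _ in range(10000)]
part_X1 = essais.count('structure sociale X1') / len(essais)
print(round(part_X1, 2))  # close to 0.10
```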

It is quite amusing to be my own student. I already know that if a student is to understand anything from my classes, they need their own logical structures for representing what I tell them, a bit like their own Python. Next, it is important that this logical structure we use to understand reality be functional, that is, that it actually does something. Insufficiently anchored associations will not work. Let me illustrate that with Python. I imagine the association of a pair of persons with a pair of technologies. Each of the two persons has preferences, expressed as a number of points assigned to each technology. In Python, it looks like this:

>>> préférences = {'Personne A': {'technologie 1': 4, 'technologie 2': 3}, 'Personne B': {'technologie 1': 10, 'technologie 2': 12}}

My Python interpreter swallowed that expression without protest. Definition-wise, it is correct. Now, a little test: I type 'Personne A' on an interpreter line. It gives me back the same thing, that is, the expression 'Personne A'. I type 'Personne A' 'technologie 1' and I get 'Personne Atechnologie 1' from my interpreter, because adjacent string literals simply get concatenated. A nihilistic reaction, but an understandable one. If I do not establish a function that transforms persons or technologies into numeric scores, the association of ideas has no value in itself.
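To show the difference between a loose association and an anchored one, here is how the same dictionary yields actual numbers once it is indexed properly; the little aggregation function at the end is my own illustration:

```python
préférences = {'Personne A': {'technologie 1': 4, 'technologie 2': 3},
               'Personne B': {'technologie 1': 10, 'technologie 2': 12}}

# Nested indexing, not string juxtaposition, retrieves a score:
print(préférences['Personne A']['technologie 1'])  # prints 4

# A simple function that makes the association functional:
# the total points a technology collects across all persons.
def points_totaux(prefs, technologie):
    return sum(scores[technologie] for scores in prefs.values())

print(points_totaux(préférences, 'technologie 1'))  # 4 + 10 = 14
print(points_totaux(préférences, 'technologie 2'))  # 3 + 12 = 15
```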

The point of this whole tirade is to explain that the deterministic approach on one side and the probabilistic one on the other are not mutually exclusive. Reality as such is a function that transforms one set of phenomena into another set of phenomena. We humans perceive fragments of that function, and at first approach we make associations of the deterministic type. Every determinism is in fact an expectation, more or less elaborate, about the way reality works. With luck, we get the opportunity to observe said reality and to trace probabilities associated with our expectations. When I try to understand why, around 2007 – 2008, the growth rate of the global market for renewable energies suddenly overtook the growth rate of total energy consumption, it is as if I were writing several alternative expressions in Python and then testing them in two steps. The first step is logic. If, after validating my expression with an 'Enter', my Python interpreter patiently awaits my next move without crying 'Error!', that means I have passed the first test. The second step is to put that acceptable expression into a context, such as an algorithm, or even a simple command typed into the interpreter, and to see whether, first, there is any reaction at all and, second, whether that reaction seems useful.