Monday, April 6, 2020

Beautiful Software

There are three strange facts that force me to consider good software to be (among other things) beautiful and natural, compelling us to examine closely what these words mean.
  1. People can tell the difference between something that is natural, that is, not man-made, and something that is a human artifact.
  2. However it's measured, nature makes more complex products than we can. Of course, the human species is an 'artifact' of nature, so we can never win that game.
  3. Among the products of nature and humans, we judge the best to be 'natural' and 'beautiful'.
Regarding (1) -- of course it is possible to fool people: with a photograph, for example. But when we do so, we are merely re-emphasizing that there is a perceptual faculty, in the brain, that reacts to impressions of natural structure.

Regarding (2) -- there are many wonderfully complex natural products, perhaps all of them, so complex that we cannot comprehend how they were constructed. And yet, we know that good software must be comprehensible. This calls into question the value of pursuing an understanding of nature's methods, since we will not be able to produce things exactly the way that nature does. However: a) we do not even understand the way that we produce things, from the point of view of the natural sciences, but we try anyway, and find it helpful; and b) the natural sciences exist in order to become more enlightened about nature's nature, in part to discover more about our own nature, which can in turn be used to improve what we do, if applied responsibly. We don't need to copy nature exactly. We just need to do better than we're doing now.

Regarding (3) -- there are many human artifacts and processes, even very popular and 'successful' ones, that are neither beautiful nor natural. Over the decades, I've found this to be a good indication of a harmful product or process. Popular and dominant is not necessarily good, but unfortunately, through habit and the projection of power through our society, these bad habits are often called 'best practices'. Learning to question these dogmas is one of the most important skills one can develop in life -- along with an ability to piece together those qualities that should replace the dogmas. This process of enlightenment, in the broad sense, should never end.

Finally, even the most beautiful and natural software will do harm if it was not created freely and coöperatively, for the purpose of healing and helping people and the ecology. Responsibility is rarely practiced in the creation of any technology, but we won't survive unless we become truly, doggedly responsible.

We'll be examining these ideas and practices in a global educational and research initiative, consisting of conferences, classes, discussions, and publications. You are all invited.


Wednesday, January 22, 2020

Software tools are lost in the weeds

All global software research and development -- on languages, paradigms for languages, native features on platforms, IDEs, services, frameworks, collaborative tools, approaches to collaboration, optimization, and analysis -- is far too obsessed with trivial, peripheral, and ephemeral details. There are big problems in the world of computing, which everyone experiences, but nobody works on. Instead, we unnecessarily take small journeys through slightly recombined sets of ideas about inconsequential concerns. Our industry is so deeply driven by puffery that we are constantly buffeted by these small, trivial changes.

If this continues, we will never begin to approach a set of tools that builds upon, and allows for, the comfortable expression of human ideas. Instead we will remain lost in an intellectual world that rejects conceptual progress and praises economic success, low-level implementation, and efficiency. We will never come to agreement about the goals and concepts that should be represented and shared at the high and intermediate levels of any application.

What is the solution? It really doesn't take new tools.

What we need is for every programmer to focus on summarizing their application's purpose and structure in a brief, explanatory, comprehensible fashion -- at the top of anything that might be hard to comprehend -- and to use that summary to drive the actual, operational, implementation through a set of similarly comprehensible intermediate structures -- until the application works, and is modifiable at any of these levels, by any person. 
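To make this concrete, here is a minimal sketch of what such a top-down summary might look like -- an invented newsletter example, with hypothetical names throughout, and a sketch rather than a prescription:

```python
# A minimal sketch, with hypothetical names throughout, of a program whose top
# level reads as a summary of its purpose, and whose intermediate structures
# remain comprehensible on their own terms.

from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Story:
    title: str
    written_on: date


def publish_weekly_newsletter(stories, today, send):
    """Collect this week's stories, arrange them for comfortable reading,
    and hand the finished issue to whoever delivers it."""
    this_week = stories_written_this_week(stories, today)
    issue = arrange_for_comfortable_reading(this_week)
    send(issue)


def stories_written_this_week(stories, today):
    # Intermediate level: still phrased in the ideas of the newsletter,
    # not in the vocabulary of databases or network libraries.
    one_week_ago = today - timedelta(days=7)
    return [s for s in stories if s.written_on >= one_week_ago]


def arrange_for_comfortable_reading(stories):
    # Newest first, titles only -- a placeholder for whatever judgment
    # "comfortable reading" actually requires.
    ordered = sorted(stories, key=lambda s: s.written_on, reverse=True)
    return "\n".join(s.title for s in ordered)


if __name__ == "__main__":
    stories = [
        Story("Gardens where the parking lot was", date(2020, 1, 20)),
        Story("Last month's notes", date(2019, 12, 1)),
    ]
    publish_weekly_newsletter(stories, date(2020, 1, 22), print)
```

The particular names do not matter; what matters is that the top of the program can be read, and modified, as an explanation.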

We've stopped looking at programs as examples of beautiful communication among software engineers. We need to focus on making them comprehensible from the top, and never stray from that goal. Unfortunately, we cross that boundary, and move beyond comprehensibility, all the time, for expediency, or to follow some technical pattern, or some conceptual orientation, or some sub-cultural jargon, or some framework ideology, or logical consistency, or other fashionable nonsense. 

Don't do it. When you begin to solve the puzzle of these initially-incomprehensible sets of new notations and definitions, or language or platform features, or dorky framework requirements, consider them less central than their designers do. They aren't your program. Push them down, consign them to some kind of container or wrapper for unimportant details ... an object or function or module or whatever you prefer ... making sure it does not conquer the readability of the operational top of your program, or any of the intermediate levels after that.
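Here is a minimal sketch of what 'pushing them down' can look like -- the notification library, its 'dispatch' call, and every name are invented for illustration:

```python
# A sketch of 'pushing the framework down': the awkward details of a
# hypothetical notification library live in one small wrapper, so the top of
# the program keeps speaking in human terms.

class Notifier:
    """Everything this program means by 'tell a person something'."""

    def __init__(self, framework_client):
        # The framework's configuration ritual is contained here, and only here.
        self._client = framework_client

    def tell(self, person, message):
        # Whatever parameter shapes, flags, or modes the framework demands,
        # they are this wrapper's problem, not the reader's.
        self._client.dispatch(payload={"rcpt": person, "body": message},
                              mode="IMMEDIATE")


def remind_gardeners_about_saturday(notifier, gardeners):
    # The operational top: readable without knowing the framework at all.
    for person in gardeners:
        notifier.tell(person, "The garden work party is this Saturday morning.")


class FakePrintingClient:
    """A stand-in for the imagined framework, so the sketch runs on its own."""
    def dispatch(self, payload, mode):
        print(mode, payload)


remind_gardeners_about_saturday(Notifier(FakePrintingClient()), ["Ada", "Lou"])
```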

Briefly, it must be understood that a program written now is not automatically understandable later simply because a computer can operate under its direction. Denotation in any computer 'language' doesn't provide meaning in the sense of a natural language ... it's just a kind of agreement about parameterized symbolic substitution, and can be done completely opaquely, or with great compassion for other humans. The actual meaning of your program is in your head ... and occasionally in the heads of other humans you've successfully explained it to. The code on the page may 'work', but it's meaningless without this brain-provided context. 

Consequently, we need to work extra hard to express that meaning in the code, with the program itself -- with the next level up of notation and ideas that you'll provide to make the 'normal' code comprehensible. Not just in the comments. The program itself needs to explicitly express comfortable human ideas, and those highest-level-first ideas need to drive your program -- otherwise, you're creating write-only code, or your group is writing code that can only be read by an ephemeral clan of programmers. That's never going to get us anywhere. But that's what everyone is doing. Instead, try to look at programming as a social contribution for the ages. Make it beautiful. Express it clearly.

Sunday, August 11, 2019

Anything that's truly good is possible.

I started using this phrase, which popped into my head some 30 years ago, as a kind of expression of optimism, in the face of all odds. I was writing about successful grassroots community projects, local ones, those that seemed hopelessly positive or naïve to people living in the Reagan/Thatcher years, but which were in fact quite straightforward. Actually, the more ideal and positive they were, the more humane and sensitive they were -- the more straightforward and possible they became. This is because an ideal project is defined as one produced by a community through achievable, agreed upon, positive steps. 

It's perhaps even a kind of mathematical expression, describing a cycle of imagination, pursuit, discovery, and reformulation: a principle in the practice of natural science wrapped into a form that will beg good questions. "Anything that's truly good is possible". So, what is 'good'? What is 'possible'? These are left to the listener, but the equation itself is intended to kind of poke anyone who hears it, so that their answers, even though they will change, can act as a guide to better work: in community organizing, personal responsibility, public-interest engineering, and theory formation in the natural sciences.

The counterexamples are instructive. If a marketing executive at a pop-technology company promises something that is a lie, typically through advertising, and seduces people with the 'idea' of the product by using spectacle and stimulants and money, then they're selling the impossible. That's not good. It even falls outside any real discussion about pursuing good or possible projects. So, involvement in those not-even-false promises is not something that a responsible, community-minded person should participate in.

The same is true for things we know to be bad: injustice, war, destruction of people and the environment, bad products, bad effects, etc ... no one will ever claim that these are in themselves truly good. The facts need to be disguised, with dishonest rhetoric or with ideas that are disconnected from, and uninformed by, their bad consequences. It's surely possible to do bad things, under these circumstances. But, again back to the equation: it's not what was promised and it will not be sustainable -- to which the current state of the world testifies. Any bad consequence simply cannot last. Which means plutocracy, technocracy, kleptocracy, privilege, destruction, and injustice, are ultimately unsustainable. The idealists, the ones who believe in intelligent, beautiful, compassionate, careful, creative lives for all people, in harmony with nature ... are the most practical.

If we're removing parking lots and freeways, turning them into gardens and wild areas, and we're having fun doing this fulfilling work, and we have a 'crazy' notion like "wouldn't it be great if everyone had the opportunity to do this?" Well ... we're describing a good thing. Perhaps even a necessary thing. And it certainly seems possible. So, we'll figure it out. And we can continue to use this phrase as a guide to finding the good ways to do the good things.

That's how an activist uses this equation. 

It works in the natural sciences as well, partly because 'truly good' is how we describe a truly enlightening and solid idea or result. That's what we're spending our days looking for: enlightenment. So it better be possible!

This brings me to two interrelated topics which, at first, will seem hard to square with each other: 1) how we might be able to write software in a more humane manner, and 2) the difficult natural science of uncovering the structure of concepts within the human brain.

I'll start with a story about patterns and pattern languages.

About 55 years ago, among a subgroup of people, the word 'patterns' started to be used to describe truly good ideas: something that is helpful in improving a situation, but not so specific that you'd call it a 'trick' or a 'hack' or a 'tip' in a technical manual. It's a general good idea which makes sense to both your head and your heart. An example is a 'transportation interchange': some well-known, very visible public place where people can transfer from one form of public transport to another. Good idea.

So, a pattern language is a set of patterns that a community finds to be good, and which are good and useful at various levels of scale and consideration, in order to help them to make a better world. It's a language full of good, practical ideas.

Doesn't sound far from the equation, does it? Good = possible. 

Patterns of this sort were proposed and explored by philosopher and architect Christopher Alexander, in books such as Notes on the Synthesis of Form and A Pattern Language. We were friends, and one time, while taking a train to some appointment somewhere in England, I gave him my "Anything that's truly good is possible" phrase. He became quiet and lost in thought for 20 minutes. Then I think he said something like "that's good".

Patterns started to interest computer people as early as the 1960's, but this interest really picked up steam in the late 80's and early 90's, when the object-oriented programming community found them a useful vehicle for sharing ideas. 

During this time, a number of interesting notions emerged and faded out, in the normal tumult that takes place in computer industry discussions, where ideas and taglines and nomenclature are like fashion, and people, in the name of a pragmatism that disguises self-interest and capitalism, can be tribal and fickle and dogmatic.

One interesting idea that became lost: groups of interlinked patterns, the pattern languages, should be generative.

They meant a number of things by this, which I'll get to in a moment, but the word itself comes from Noam Chomsky's work on generative grammar, which means a mathematical system that generates all grammatically correct sentences. Chomsky was the first to point out (while inventing several important ideas in computer science) that we have no idea how to generate all correct sentences, even meaningless ones. And this should not surprise us, since grammar is a very complex biological system. A bit of progress has been made on this problem in 60 years, but there still is no complete written grammar of any natural human language. The theory that would be necessary to even write such a grammar down, if we could, is still in its early stages.

A generative grammar would generate all sentences of a natural language, and not include any sentence that is not in that natural language.
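As a toy illustration -- and only a toy, nowhere near a real grammar of any natural language -- here is a sketch of what 'generative' means: a few rewrite rules that produce every sentence of a tiny invented fragment, and nothing outside it.

```python
# A toy 'generative grammar': it generates every string in a tiny invented
# fragment, and nothing outside it. Nothing here approaches a real grammar of
# any human language; that is the point.

import itertools

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the cat"], ["the dog"]],
    "VP": [["sleeps"], ["sees", "NP"]],
}

def expand(symbol):
    """Yield every string of words the grammar derives from this symbol."""
    if symbol not in RULES:          # a terminal: already a word or phrase
        yield [symbol]
        return
    for production in RULES[symbol]:
        # every way of expanding each piece of the production, kept in order
        for parts in itertools.product(*(expand(piece) for piece in production)):
            yield [word for part in parts for word in part]

sentences = {" ".join(words) for words in expand("S")}
print(sentences)
# six sentences in all, e.g. 'the cat sleeps', 'the dog sees the cat'
print("the sleeps cat" in sentences)   # False: not generated, so not 'possible'
```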

So, 'good sentences' means 'possible sentences'. Good as judged by people as 'grammatical', possible in the sense of generated by a human's natural language faculty. That's our equation again, reformulated as a statement of interest in how the black box of human language works.

This is a formulation that reflects the motivation behind every theory in the natural sciences, no matter what stage or state the theory is in.

So a generative pattern language would allow a person to generate good programs. That is, 'the good' becomes 'the possible'. Would such a pattern language be inspiring? Yes, a generative pattern language would need to inspire the creation of good programs. Much like good ideas need to be tested for their genuine goodness, and their genuine possibility, in any field. (A Pattern Language is a very inspiring book, and the bestselling book on architecture of all time.) But 'generative' could also be a stronger criterion, defining the ways in which good patterns could be combined to form new good patterns.

At this point we might wonder: where does this equation come from? 

Another example might offer a clue. It comes from the history of thinking about thinking, and tying these strands together might help us to make some real progress.

A pattern is just an idea. Well, it's a good idea. But if it's really good, it can be used as a metaphor for other good ideas. And if it's really really good, when you 'compose' it or 'integrate it' with other good ideas, the result is a good idea. This is similar to the strong definition of a 'generative pattern language' mentioned above. But now we're talking about all concepts.

In reality, the nature of 'compose' and 'integrate' here is not some simple rule, like those in formal logic. In fact, to function, to get a good idea from two good ideas, you would need to include judgment of the 'truly good' during and after the act of composition. There's no getting around that.

That 'moment of judgment' is the generative bit the pattern people were looking for. Honestly, it was why object-oriented patterns fell apart, as a guide to good work, just as any formal logic, reliant as it is on one set of ideas and rules, falls apart. The composition of the ideas into new ideas needs to include new judgement, and that includes testing the new idea for whether it is 'humane' or 'natural' or 'truly good'. And far too many software patterns, intended simply to codify an implementation of some 'theory' of software, didn't understand this sufficiently to do a good job creating 'higher-level' good structure. And that's why our software tools continue to suck.

But, this gives us a clue as to how to do better, and a clue as to how we should be studying human ideas from the point of view of the cognitive sciences.

It's an old notion, possibly as old as the human brain, that there are innate ideas, which other animals have as well. Our genetic endowments include ideas like 'grab' and 'climb' and 'twist' and 'move' and 'stuff' and 'hold' and 'thing' and some easy compositions like 'thing I hold' and 'stuff you grab' and 'thing I climb'. But we have something else, an indefinite kind of idea-composition, which also lets us name new ideas, and when we do name these composed ideas, they join our intuition, so they feel rather like innate ideas (something that has misled philosophers from Locke to Quine). This is because the idea that comes to mind when you say 'screwdriver' is the same for every speaker of every language. This was recently demonstrated with fMRI imaging by Marcel Just and Tom Mitchell at Carnegie-Mellon, but Aristotle pointed it out 2500 years ago: he said we wouldn't be able to translate anything unless we had the same ideas in our heads.

So, the nature of this "composition of new ideas" is very important. The criteria it must satisfy are far more difficult than we know. The composition cannot be defined with simple formalisms, certainly not with theories of symbolic manipulation that are untested against the human brain.

So, for example, when the object-oriented programmer does a simple composition of one type of object with another, maybe one pattern with another, they may say that 'it works' because the machine performed, at this moment, the way they wanted. But, that's not good enough. There's no consideration as to whether the ideas and goals in the head of the programmer have been sufficiently explained, expressed, noted, and judged. In fact, the tendency will be not to do that, because of some belief that the formal compositions of working code must produce 'working code'. Well, it may run, and do what you want for now, but it's not really working, since it doesn't express a human idea. We do not know the rules for composition of human ideas, so all we can do is judge the new compositions as they occur to us, and see if they are human and truly good. 

But we don't do that. We use simpler formal composition mechanisms with simple consistency rules, and almost no judgement. This results in "write-only" code. Even if someone else can make sense of it -- and typically the programmer cannot themselves make sense of it after a time -- it has not been judged to be good, human, or natural. So it's not good and, in a sense, it's impossible and impractical.
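To make the contrast concrete, here is a small invented sketch. Both versions run; only one of them carries the idea that was in the programmer's head.

```python
# A sketch of the contrast, with an invented example. Both versions 'work' in
# the sense that the machine performs; only one expresses the human idea.

# Version one: the machine performs, and that is all we can say about it.
def f(xs, n):
    return [x for x in xs if x[1] >= n][:3]

# Version two: the same composition, but carrying the idea it came from --
# 'show a visitor the handful of gardens that still need volunteers'.
GARDENS_SHOWN_TO_A_VISITOR = 3

def gardens_still_needing_volunteers(gardens, volunteers_wanted):
    """A visitor should see only gardens that still need at least this many
    volunteers, and only a handful of them, so the choice stays comfortable."""
    needing_help = [g for g in gardens if g[1] >= volunteers_wanted]
    return needing_help[:GARDENS_SHOWN_TO_A_VISITOR]

gardens = [("Hill St.", 4), ("Old lot", 0), ("Creekside", 2)]
print(f(gardens, 2))
print(gardens_still_needing_volunteers(gardens, 2))
```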

Let's go back to theories of ideas. If we are to find the innate ideas, and begin to untangle the mechanism for the composition of ideas, we can do so in part through fMRI confirmation of the Just/Mitchell type, and in part by judging the new ideas (using the method of impression) that are generated with our theory of idea composition.

Are those ideas any good? Do they make sense? Would they ever occur to you? We can begin to create a theory of innate ideas and their composition in human beings only if we abandon anything analogous to the formal compositions of objects and functions in programming languages, mathematics, and formal logic. Because we know those do not generate the right results. They do not generate good, natural ideas. We should remember that Aristotle's syllogisms and Boole's operators were actually just working theories about how the mind worked, or 'should' work, somewhere along the descriptive-prescriptive spectrum. They need updating with the considerations I've mentioned.

The same is true for engineering. We must abandon the dictatorship of the formalism, the tyranny of the parts over the whole, and create software that is judged on whether the formalisms used make sense as human thoughts. We try to pepper code with comments and good names. It's not enough. We need to escape the formalisms that are stopping us from thinking about how to best express our thoughts.

One consequence of what I'm saying: until we do this, no program, or even an idea, or even a result, created through machine learning, could ever become 'inspectable' or 'explainable', by us or machines. Because we have not even begun to create a theory of what is comprehensible! And we won't be able to do that until we start using our internal judgment of what is good to constrain what is done, and therefore continually discover what is possible to do well.

Consequently, I have some proposals for people in the computer industry who might be interested in the natural sciences, who might be interested in doing some good for the world, who might be interested in helping to uncover something about how the brain actually operates, and who might want to fill the considerable gaps in our understanding of how the human brain of software professionals could do better, easier, and more natural work.

The first would be to avoid irresponsible work in technology. Since all work in technology these days is pretty extraordinarily irresponsible, this will take quite a bit of self-education and inspiration on the part of the reader.


The second would be to correct irresponsible pseudo-science in the computer industry. While most of this is the result of marketing and self-deception among the successful, it's really rather galling that the computer industry has been the center of a revival of behaviorist and associationist 'theories' which were rightly buried decades ago in the cognitive sciences. This will take quite a bit of research and thought on the part of the reader.

The third would be to help with actual research on modeling and testing the construction of human ideas from innate ideas using innate mechanisms. Essentially, we're trying to find out more about the human mind, and trying to find out how it is that the brain-internal structure that fMRI reveals for the word 'screwdriver' was constructed from other ideas, innate or previously abstracted. If we could focus on this collective effort, we might make some progress on this problem before the species becomes extinct.

The fourth would be to avoid these simplistic ideas that pretend to be the solution, the 'end of programming': functions, rewriting, categories, objects, classes, ML ... it's really quite nauseating to even hear this kind of fashionista religious dogma spoken by engineers. Engineering is something done by the human mind, and until we start approaching it as an internal issue, an issue with mental comfort, and begin to map out an approach to working more naturally, no progress will be made on making better software.

I understand it is incumbent upon me to better elucidate these initiatives. 

More soon.



Sunday, January 1, 2017

Constant Human Capacities

There's a set of cognitive blinders, found both in everyday life and in academic work, which need to be recognized. I'm tempted to call the actual 'instantiation' of a cognitive blinder a 'fallacy', but on deep examination, that term is almost meaningless outside of the artificial environments of formal logic. Let's just call it a 'mistake' -- a conceptual construction that, outside of the conceiver's awareness, relied on a known innate blinder. We address mistakes, in a sense, by becoming aware of them -- by participating in rational discourse about them -- in the same way that we try to recognize other innate human perceptual or cognitive phenomena, such as optical illusions.

Let me describe two of these mistakes, which I encounter often.

Shocking

The first I'll call the "future shock" mistake. This is the idea that "humankind is experiencing accelerating change", which is among the presumptions and conclusions of Alvin Toffler's Future Shock. He goes further, blaming this acceleration for social changes that he does not like.

You could read a version of the "future shock" mistake in any era that has passed down a literature. No matter the actual seriousness of the problems, there are always times when things appear to be moving too fast. People who were previously comfortable become alarmed at change, and write books about how abnormal the rate-of-change is. If the change is positive for others, those people will say that change hasn't come fast enough.

So, it's nothing new. Nothing 'future' about it. 

The 'shock' that most people in the world experience at any given time might arise from technological or demographic change, as the book proposed. But far more shock emerges from how people are treated by governments, institutions, corporations, and each other. There are many other factors of course -- weather, disaster, novelty, loss. But, let's clarify with a comparison: the 2003 invasion of Iraq caused vastly more global and individual shock than the pervasive technologies that emerged over the last 50 years.

Shock is personal. It happens within the human mind. The brain, and its innate capacity for shock, never changes. We're the same species, with each of us nearly identical to each other, over the last 50,000 years.

So the shock of being invaded has always been the same. And it's bad. We need the invading to stop -- or the lobbying, the privilege accumulation, the elitism, the bullying, the threatening, etc. These ancient problems demand far more immediate attention than bourgeois discomforts with "technological rate-of-change". If we want to address serious consequences of technology, we should fight the resource wars in Africa caused by our demand for cell-phones and batteries, and demand accountability and sustainability from corporations and governments. 

Stopping the wrong technologies, and promoting the right ones, sounds good, but that can only happen after we make progress in the fight for a real participatory democracy in modern times, which would give people the leverage to address the serious problems. That's the priority, and the root of the other problems. Nobody actually evaluates the utility of technology because the public has no collective, conscious influence over its development. The public is simply 'the market': the consumers. They can only choose as individuals, from among those limited and useless things presented to them.

Sometimes commentators ignore major factors, and invent others, in a kind of "search for the obscure": a drive to say something new, sell more books, and book more lectures. We'd ridicule anyone who said "there's a paradigm shift going on among cats: because of youtube videos, domestic cats are experiencing a kind of accelerated cultural change that they've never experienced before. How will they cope with this?" It's silly, because we know cats will cognitively cope with this "paparazzi paradigm", or not, on a case-by-case basis. Kind treatment is their more immediate and critical concern. The same is true for people.

Era Discrimination

... is the second mistake. It's the idea, generally speaking, that people 1,000 years ago were somehow more 'primitive' or more 'stupid' than we are. This is obviously the same kind of discrimination that continues against native peoples today, or the working poor at any time. Thanks in part to Franz Boas, the profession of anthropology took this mistake to the trash-heap long ago, but it still survives in all kinds of modern bigotry. 

With cash-backed corporate propaganda driving technology discussion these days, of course indoctrinated future-enthusiasts will snicker at the 'ignorance' of the past. My favorite example is a NOVA episode where some modern construction engineers concluded that a sketch for a giant wooden crossbow, by Leonardo da Vinci, was fantastically antiquated, primitive, unenlightened, and incorrect. Over the hour-long show, we follow the top-notch engineers as they gradually realize that they have no real experience with timber-based mechanical engineering, and that Leonardo's sketch was exactly right. Forget whether or not da Vinci was a genius -- why did they believe that they, in their lifetimes, had somehow accumulated more skill with wood than a professional engineer 500 years ago?

So, why do I find a connection between this 'modern man' hubris, and the 'shock' notion that people will need 'new tools to cope' with change?

In both cases, pundits have managed not to think of others as real people, like themselves, with full lives, full of difficulties, bursting with energy and ability. This is the nature of bigotry. And, frankly, it's just bad science. 

Instead, we should admire the craftsperson of a thousand years ago. Good luck trying to recreate anything they did! And believe in people today -- they can cope with change. But do your best to make sure that everyone treats each other with respect and human kindness. We have a great deal of hard work ahead of us.

Monday, February 29, 2016

Computing as Mental Construct

Perhaps 14 years ago, late-to-the-game technology pundits became excited about a particular model of growth: user-created content. With today's vast content farms, such as facebook or twitter, everyone is intimately familiar with, and contributes freely to, businesses using this model. It was presented as a counterpoint to the production and consolidation of professional content. Both models are intended to fulfill monopolistic dreams. But earlier-to-the-game technologists pointed out that even the earliest web successes depended upon user-generated content.

All this misses something much more fundamental: the computer industry has always made money from user-generated content. The reality couldn't possibly be different. Before I explain why, I need to provide an example.

You could start anywhere. In the late 1970's, normal people happily produced handwritten and typewritten letters and memos. Suddenly, five years later, they were all using their personal computers to send blurry-inked dot-matrix printouts to each other. 

All of this required user-generated content. The benefits of the technology were massively and illogically overstated, with seductive advertising evoking riches and robots. 

In a sense, people were being sold to themselves. "Imagine what you could do with this" says the suggestive advertiser. "Yes, I can imagine that!" says the imaginative consumer, who then buys the expensive computer, so they can 'make things'. They also bought expensive professional entertainment products, so they could 'relax'. This is exactly the same situation consumers find themselves in today: applications for generating your content, and others for kicking-back and letting the professionals distract you.

These are just innate qualities of people. We want to watch, and we want to build. People who want power and influence will always take advantage of innate qualities ... in a sense, this actually defines their profession.

Consumer-producers aren't totally unaware of this manipulation. But they are natural optimists. They assume that some potential utility underpins these techno-cultural shifts 'towards a better future'. They assume that 'early-adopters' need to be watched carefully, in the hope of catching a ride to heights of leisure and leverage. 

People are also natural builders, which leads us to a deeper reason behind the pervasiveness of user-generated content in computing: 

Every tool intends to aid the generation of user content.

But there is a deeper reason still: 

Tools are within us

I mean that quite literally. 

Without a human brain to notice that another human brain has produced a tool, and without more human brains writing and speaking about the ideas behind the tool, the chunk of stuff that 'is' the tool, is just a human by-product. 

It is a chunk of stuff that human brains might find exciting, stimulating, beautiful, useful, useless, ugly, complex, simple ... but like the chunk of stuff, all those words have meaning only to the human brain (with some understanding that we evolved from animals that may have related ideas, also within their brains, that is, they make sense in their umwelt, or worldview).

So, if a tool is something that only our brain understands, then the very idea of using a tool is 'user-generated' content. Some other human may have said something to you to spark the idea, to help you to construct the idea, but it's your idea, or you wouldn't have it. It is, of course, essentially the same idea, now in more than one brain. The word screwdriver produces the same recognizable structure in the brain no matter which language is spoken, a recent result from studies at Carnegie-Mellon, long-anticipated by some: Aristotle pointed out, in the opening paragraphs of On Interpretation, that this must be true, or we wouldn't be able to translate anything.

So, novel or not, any tool is inside the brain. We reconstructed the idea of the tool. What we do with it next is totally the product of our brains. 

The same is true with the idea of computation itself. The computer is a tool that is in our brain. There is no 'physics of computation' ... computation is not some basic 'natural force' ... computation is a human mental construct. And every piece of a computer, every line of code, only has meaning to the human brain. Of course it's doing something in the outside world, and the consequences can be beyond the scope of human understanding. But we cannot begin to understand a computer program, or a computer system, without first taking this stuff we've created and giving it our intellectual effort, an intellectual effort we do not understand, but which we make use of. Our brains are tools, which we use sometimes consciously, but mostly unconsciously, in a foggy groping towards an internally-defined 'understanding' or 'awareness' of what we do, and an even foggier understanding of what is going on outside of our brains.

Our world is user-generated. Not the world, of course. Just our world. The intrinsic one. The one in our heads. The one we experience. Mostly it is the same as other people's, because we are the same species. That's why we can all construct the idea of 'screwdriver' or 'computer' in our brains. We are  not conscious of how we do it, in the same way that we are not conscious about our digestive system, or our ability to walk. But the world is still constructed by us, or else how could we experience it the way we do? We know how flies have difficulty seeing some things, things we can see, and that our pets can hear things that we cannot. They are constructing different worlds because they have a different biological endowment.

Much of what the human brain generates, whether 'ideas', which we might 'communicate', or which may 'produce something', is new in some aspects ... but much is the same, in other aspects. The stuff that's the same is similar because, like the fly, we homo sapiens have our limits, our habits, and our strengths. 'History repeats' because we're all human. Bonobo chimps also repeat themselves, with some differences, including some repetitions and differences that we can never know, because we cannot become Bonobos.

That's why so many 'revolutions' in human culture seem similar. The human brain is still the place where culture itself resides, so the differences between the old and the new are going to be, well, less than 'revolutionary'. We can easily exaggerate or cartoonify anything and call it a 'revolution', or 'progress', or a 'paradigm shift', et cetera. That's what people do naturally, and it's a hard habit to break, because we're all chimps and we get very excited when we suddenly see something in our minds that seems new or helpful. 

But those things are in our brains. When we generate content inspired by those things in our brains, it's genuinely gratuitous to tell someone "well, we've seen that before". Who cares? Everything is somewhat new and somewhat something that anyone can do. 

Time for the best example.

Think of language: very few of my sentences, above, were ever spoken or written before. That's something that all of us do, every day. Complete novelty. The smaller ideas expressed are a little less new, but still pretty new, because they were combined in new ways, using new examples, et cetera. The larger ideas and themes are frankly far less new ... in fact even I have been harping on about them for decades ... but maybe you've never heard them before!

But, whether you have heard them, or not -- if we agree on some of them, and remember them, we can begin to form a culture, a better future, around these agreements. If these become working assumptions for, say, a new approach to computing, based on natural science and an awareness that 'this is all in our heads', we might actually make some 'progress' towards extracting computing from the pervasive misunderstandings and exaggerations, which distract us from doing good with these tools. Our work will still be user-generated, but we'll be mindful of the brain's place in what we produce, and that should help us to grow a more thoughtful computing community.

Maybe I'm just looking for more quiet, rationality, and feeling, in the very noisy world of computing.

Sunday, November 22, 2015

Explanatory Development

We consider masters in a craft to know what they do -- but in a very limited sense. If they are really good, they know they do not know much about what they are doing. They are only conscious of certain kinds of things, and they do their best to make use of the means provided them -- but they'd be fooling themselves if they thought they knew much about how they, or any human beings, actually do what they do. The best they can do, is to explain what-and-how well enough so that another human might be able to understand it. But what is really going on inside our heads when we do something difficult? The answers are far beyond the reach of current research.

Computer programming is also a craft, but we've made almost no tools to help explanation, because we're not in the habit of thinking of explanation as important. The lack of tools for explaining a program while developing it, to ourselves or anyone else, continues to reinforce our non-explanatory habit. This is discussed occasionally -- small tools pop up regularly -- but no consensus on the importance of explanation has even begun to emerge. This is strange, considering what we know about what we do.

A program has a direct, complex effect upon a complex machine -- an effect that humans spend much time and effort corralling and defining as carefully as they can, so the resulting computer operation tends towards their expectations. Without people, everything about symbols and symbolic manipulation, involving some 'automation' or not, in any of the formal sciences -- logic, mathematics, computer engineering, etc. -- is meaningless. Without people, it's not possible to know whether a program is 'correct', because the measures of 'correctness', the desiderata, let's call them the 'acceptance criteria', remain only in the heads of people.

We make code meaningful to us. The symbols in our programs are simply artifacts, markers and reminders, whose real meaning resides within our brains, or within the brains of some other people. Providing meaning to these symbols is strictly a human experience, and, more importantly, providing meaning to my code is strictly an experience of mine. I may have found a way to make a machine do something I want it to do, but the purpose and meaning of the symbols that have this effect on the machine are only understandable in human terms by another human being if we are part of a team that is somehow sharing this meaning. That is, only if I code with explanation as a primary principle.

Some of the code may be more comprehensible if we're part of a highly restricted and indoctrinated coding community. This can implicitly provide a kind of ersatz explanation, limited in duration to the programming community, or fashion, in question. These don't last long.

What does endure is a broader explanation, which keeps human universals in mind. This needs a first-class status in my code, must be integrated with it, and re-written continually, to keep my own thoughts straight, and to keep my potential readers, colleagues, and users, as completely informed as possible. 

For example, say that I have some business logic in my program, regarding the access to different features provided to different types of users. We often call this an 'access control layer' today. But am I making that logic visible to other human beings, such as my support staff, or my testers? How am I inventorying the "features" in my code that users have access to? If, say, I have a webapp that's essentially a dashboard, something often called a 'single-page application' today, how have I identified all the "parts" and "wholes" of this beast? Is all this comprehensible to anyone? Or is it buried in code, so only I or a handful of people can see what's going on? Instead, I should make an accessible, running guide to the actual live features, and the actual live access layer, in the actual live code, so that I and others can see everything.
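One hypothetical shape such a guide could take -- every name here is invented for illustration -- is a single registry that the running code consults for its access checks, and that can also print itself as an explanation for support staff, testers, and anyone else:

```python
# A sketch of a guide to the actual live features and the actual live access
# layer: one registry, consulted by the running code and printable as an
# explanation. All feature names and roles are invented.

FEATURES = {
    "view-dashboard": {
        "purpose": "See the charts summarizing this month's activity.",
        "allowed_roles": {"member", "staff", "administrator"},
    },
    "export-records": {
        "purpose": "Download the raw records behind the charts.",
        "allowed_roles": {"staff", "administrator"},
    },
    "manage-accounts": {
        "purpose": "Invite, suspend, or remove other people's accounts.",
        "allowed_roles": {"administrator"},
    },
}

def may_use(role, feature):
    """The live access check, read from the same entries the guide prints."""
    return role in FEATURES[feature]["allowed_roles"]

def print_guide():
    """The human-readable guide to the features and the access layer."""
    for name, entry in FEATURES.items():
        roles = ", ".join(sorted(entry["allowed_roles"]))
        print(f"{name}: {entry['purpose']} (available to: {roles})")

print_guide()
print(may_use("member", "export-records"))   # False
```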

Well, why wouldn't I use this 'guide', whatever it looks like, whatever approach I decide to take, to 'guide' my development? Why wouldn't I take my ideas about the specific system or application, and make those central, through the guide, to its actual development, maintenance, operation, and explanatory documentation, for the sake of myself and everyone else?

Of course this relates to notions in software architecture like an 'oracle' or a 'single source of truth'. But there are two ways I'd like to see this taken much further: 1) the guide should be pervasive and central to everything, from the organization and navigation of the code, to the description of the features, to the purpose of the product; 2) the guide should be geared towards people, including the programmers themselves, in their most humble state, when their most sensitive capacities as human beings are exposed. This should include an appreciation for living structure, beauty, and human limits, with a watchful eye upon our tendency to confuse models for reality.

By 'guide', of course, I'm not advocating any particular 'format'. I only mean any approach that values ideas, explains ideas, ties those ideas accurately and directly to the relevant code or configuration, allows for code consolidation, and explains abstractions, with an operational "yes we can find the code responsible for x" attitude towards making the system transparent, and any 'x' comprehensible. This puts a far greater organizing burden on the explanatory structure than you would find in Literate Programming documentation, for example. 
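As a small sketch of that "yes we can find the code responsible for x" attitude -- again with invented names, and no claim about format -- guide entries can point directly at the functions that realize them, so the explanation and the code cannot quietly drift apart:

```python
# A sketch of tying ideas directly to the code responsible for them.
# The features and functions are invented for illustration.

import inspect

def greet_returning_reader(name):
    """Welcome back someone who has visited before."""
    return f"Welcome back, {name}."

def suggest_next_article(history):
    """Offer one article related to what the reader looked at last."""
    return f"Since you read '{history[-1]}', you might enjoy its sequel."

GUIDE = {
    "greeting returning readers": greet_returning_reader,
    "suggesting the next article": suggest_next_article,
}

def code_responsible_for(idea):
    """Answer 'where is the code responsible for x?' from the guide itself."""
    fn = GUIDE[idea]
    return f"{fn.__name__} ({inspect.getsourcefile(fn)}:{inspect.getsourcelines(fn)[1]})"

print(code_responsible_for("suggesting the next article"))
```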

It has nothing to do with using accepted 'definitions', accepted 'best practices', 'patterns', or any other pre-baked ideas or frameworks. It has everything to do with taking your ideas and their explanation, and using them to orient yourself and everyone else to anything in the application. 

Our development environments and platforms need to support this deeply operational explanatory activity. 

Currently, none do.

Tuesday, October 6, 2015

Our perception of 'analog' and 'digital' in the natural world

The words 'analog' and 'digital' sound rather precise. About a century ago, introspective engineers were inspired by these basic notions to create operational definitions and analytical tools that make use of these two labels. These can still be useful -- not so much in the natural sciences, as I'll explain below, but occasionally for setting goals in an engineering workplace. But these historic and highly-constrained technical terms are something of a distraction for any serious investigation into the human nature of 'analog' and 'digital'. The key to approaching these notions is to remember that 'analog' and 'digital' are ideas, brain-internal entities, universally available to normal human cognition. Like all ideas, they recruit mental faculties that interact in complex, unknown ways. The ways they interact in the brain seem, in this case, to tend towards mutual exclusion.

When the idea of 'digital mechanism' is associated with something in the world, it always entails an understanding that this 'mechanism' exists within an 'analog world'. It reacts in a not-quite-sufficiently-concomitant fashion, evidence for which is representable by discontinuous functions -- in other words, the 'digital' system reacts with 'unusual prominence' after some 'threshold' has been reached. Of course, anything in the natural world can be found to have this 'character', and often human investigators are very keen to determine the expression and causation underlying a subject's 'digital' character. 

At the same time, the same subject of investigation will reveal 'analog' characteristics -- again, when the investigator looks for them. Unfortunately, this is true of perhaps the entire range of human-observed properties of complex systems: something that is 'whole' also seems to have more-or-less 'distinct' 'parts', something that 'acts' also seems to be 'passive', something that 'flows' also appears to be 'static' ... and special training is typically required to regularly resolve these apparent contradictions when taking different conceptual-perceptual approaches. 

One could characterize 'analog' and 'digital' as 'projections' of human cognition upon our limited sensations of the world around us, real and imaginary -- the same cognition and sensation that makes us feel that we know 'the world'. When we call something 'analog' and 'digital', we're mostly looking at ourselves, using mental equipment that, when genetically intact, required stimulation during development to prevent atrophy. 'Analog' and 'digital' are universal, in the sense that, at the least, all humans can conceive of them. They are our biological inheritance.

We can test this by observing the world. What are simple examples of something in nature that is both analog and digital? We observe a system that's near its tipping point -- such as a stick precariously perched on its end, on a fence, in the wind. The 'analog' gradual movement will reach a threshold, and then 'the system', something which we define as observers, will achieve its 'next digital state'.

Another example is a 'loaded' system -- using a slingshot, pull a rock back until it is also in a precarious state, then let go, which is a gradual 'analog' thing from one perspective, but could be seen as a 'digital switch' from another perspective, in which the rock 'discretely' moves from the state of being 'in the slingshot' to being 'out of the slingshot'. Any system, since it is observed by a human, has these properties, but we shouldn't despair -- although these properties appear together, one will typically dominate at one point, and another at another point, and sometimes the simultaneous perspectives are very easy to separate and identify. That is, one perspective can help us to construct a more enlightening analysis than the other ... although these are interdependent properties, so we can't ever totally discount one.
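A small numeric sketch, with invented numbers, of how the same observations can be read both ways: the 'analog' reading is the continuously varying tilt of the stick, while the 'digital' reading is the observer's two-state description, defined by a threshold the observer chooses.

```python
# The same observations read two ways: a continuous 'analog' tilt, and a
# two-state 'digital' description defined by an observer-chosen threshold.
# All numbers are invented.

tilt_degrees = [2.0, 3.5, 5.0, 9.0, 16.0, 31.0, 62.0, 88.0]  # gusts of wind
FALLEN_AT = 45.0   # the observer's threshold, not a property of the stick

analog_view = tilt_degrees
digital_view = ["fallen" if tilt >= FALLEN_AT else "standing"
                for tilt in tilt_degrees]

for tilt, state in zip(analog_view, digital_view):
    print(f"{tilt:5.1f} degrees -> {state}")
# The gradual 'analog' movement crosses the threshold between 31.0 and 62.0
# degrees, and the 'digital' description jumps from 'standing' to 'fallen'.
```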

Since this is an epistemological issue, where the culprit is always human cognition, this 'finding what is important for a particular perspective' is possible even of complex affairs, far beyond the edge of what would be considered appropriate research subjects for the natural sciences. The other day, talking to someone who studies urban policy, I pointed out that a well-known government program, 'Urban Renewal', was very destructive of people's lives and neighborhoods. I gave one example, where a proposed $200 million Urban Renewal expense would have destroyed an entire neighborhood. He agreed, but then pointed out that the urban renewal fund had made one investment that was quite good. I agreed, but pointed out that this was a $2 million investment. We were both making correct points about the fund, but the massive downside of the large investment vastly outweighed the upside of the small investment, meaning we could say 'Urban Renewal' is a bad idea, by precisely two orders of magnitude.

That said, if one was studying projects that made a small number of people wealthy, with no concern for anything else, then clearly Urban Renewal would be 'good' in this sense. There are certainly people who make such assumptions.

Likewise, some natural systems have a 'digital' aspect that is far more prominent than their 'analog' aspect, given the interests of the researcher. In other cases, the analog aspect is more prominent. But the key is to define 'prominent': prominent by which criteria of interest? It is possible to get 'out of our skins' a little better, and examine our interests.

In the natural sciences, 'most prominent' means 'that effect or aspect without which nothing would happen'. Again, there are multiple factors, and we bring multiple interests to the table, but we can examine these. 'Relative importance' or 'requiredness' or even 'essence' is often considered a mysterious idea. But not if we willingly examine ourselves, as part of our research. In the sciences, we need to continually re-examine our dogmas, judgments, and criteria for intelligibility, when considering what is 'important' in any investigation. And, in the case of internal, instinctive notions like 'analog' and 'digital', we need to try to recognize when our instincts are getting in the way, as they usually do, of further enlightenment regarding the world outside, and inside, ourselves.

Sunday, August 23, 2015

The Inventory Problem

We don't quite know how many cells are in the human body, because it's a hard problem, but the usual estimates put it somewhere in the tens of trillions. But, even if we had a 'better number', we know that the human body is more than "just cells" ... and of course cells are more than "just molecules", molecules are more than "just atoms", and atoms are more than "just energy". We know that there is a human psychological tendency, and a strong tendency among people who think they are being 'scientific', to assert that a complex system consists of "nothing but" some studied factor in its makeup. You know the sort of thing: the human body is just water and chemicals, biology is nothing but physics, etc. We used to call this 'reductionist' and 'materialist', but I think that's giving too much credit to what amounts to blind dogmatism among people who have no sense of just how little we understand about the universe.

What's the higher-order structure 'above' the level of cells? Well, it's just anything we call a 'system' that we believe is interesting. When we explore the real body we find that the boundaries and the coherence of our chosen 'interesting system', say a kidney or an immune system, are not what we expected. But, then, we should expect such surprise. Humans perceive certain things in certain ways, often in several conflicting ways, and when we decide to look at 'the visual system', or 'the nose', we are making use of some still mysterious human faculty that assigns importance to certain aspects of its environment. On examination, we are always surprised, because our unexamined intuition tends to be wrong. In fact, even our attempts to divorce ourselves from our intuition tend to be heavily suffused with human tendencies. It's a tough game to find out what's going on outside of your own perception, to turn the things your intuition 'knows' into mysteries about what's 'out there'. But that's natural science. It's hard work, especially when we're dealing with complex systems.

This means there really is no ontology of the body which is more than a kind of convenience, so that we can talk about what we're studying. Ontology is important in science, because these are our current assertions about what exists in the world. Of course they are constantly changing, and much of the work is itself based on tacit knowledge that we do not understand, which is why I don't think many kinds of science make sense without a parallel study of human psychology. At any stage, our assertions are still human mental constructs. They may be more enlightening, better integrated with other theories, or more carefully constructed to avoid unnecessary stipulations, but they are only 'better'. They aren't 'complete'. Ontologies are works-in-progress, at best.

This epistemological story can be found everywhere in computing. Let's take the issue of testing a software system. In our example, let's say that we primarily care about a consistent user experience, and so the tests take place against the user interface. What is the inventory of features against which we are testing? It certainly is not the set of features we set out to build in the first place: in order to make a good product, we had to change these. The closest thing to an accurate description of the final system is the work done by the documentation team. If you have such a thing. The team has used human judgement to decide what is important for someone learning about the system. They have organized what they consider to be the 'features' of the system, and explained their purpose and behaviors, as best as they could. This is the closest thing the software company has to an inventory of features and properties against which a QA team can build a testing system. In a system where the interface is everything, and there are a lot of systems like that, and a lot of systems that should be considered like that but aren't, the only way to build reasonable tests is after-the-fact. There is a discipline called 'test-driven development', but it is only appropriate to certain internal aspects of the system; it cannot address the 'logic' that is 'externalized' for the users. There is no such logic in the code. It's a perception of the system, used to guide its development.

If this is true, there is no way to take a 'feature inventory' from within the software. The best one can do is study the user-interface, find out how it responds, talk to developers and product designers to work out their intentions when they're unclear, and keep a coherent-looking list that is easy-to-understand. This is literally not an inventory in any mechanistic sense. It is a thorough set of very-human judgments upon something that others have created.

The 'inventory' will be acceptable, and have descriptive adequacy, when the appropriate group of people can understand it. This might be a very different inventory for a quality-assurance team than for a training team or a support team. There are things the designers and the engineers find important that produce yet another 'inventory'. There are other kinds of inventories, for accessibility issues. The best you can do, in all these cases, is the most human job you can do, to explain the right things to the right audience. The idea that there is any kind of 'logically correct' software, achievable without human judgement, is absurd. A person needs to judge what is correct! We couldn't do any of this work without human judgment. 

Because of this epistemological fact, we rarely have the time for inventories of features. Instead, we look to eliminate 'problems', humanly judged, and polish the software system until it makes sense and does what the team and the users want it to do. The task of describing it, of explaining it, is done in a minimally descriptive way, taking advantage of innate and learned human understanding, and the ability of users to explore things for themselves. The quality-assurance team finds some set of tests that satisfies them, tests for problems that have been fixed, regression tests to make sure the problems don't recur. The notion of a 'complete' description of the system is considered 'just too much work', when, in fact, a 'complete description' is impossible, because such a description cannot exist; it can only be adequate to our current purposes.

This epistemological problem shows up in simpler ways. One approach to preventing virus infection in computers is to add to a growing 'blacklist' of behaviors or data that indicate an 'infection'. The other approach is to make a 'whitelist': only these operations should be possible on the system. The list is only expanded when you want to do something new, not when someone else wants to attack you. This is like avoiding the inventory problem.
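A minimal sketch of the two postures, with invented operation names -- the blacklist grows whenever an attacker invents something new, while the whitelist grows only when we decide we want to do something new:

```python
# A sketch of blacklisting versus whitelisting. The operation names are invented.

KNOWN_BAD = {"overwrite-boot-sector", "read-address-book-silently"}

def blacklist_allows(operation):
    # Allowed unless someone has already been burned by it.
    return operation not in KNOWN_BAD

PERMITTED = {"open-document", "save-document", "print-document"}

def whitelist_allows(operation):
    # Allowed only because we decided, in advance, that we want to do it.
    return operation in PERMITTED

print(blacklist_allows("encrypt-files-and-demand-ransom"))  # True -- not listed yet
print(whitelist_allows("encrypt-files-and-demand-ransom"))  # False
```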

Even more, it's reminiscent of the difference between natural science and natural history. Natural history, zoology in its older form, and even structuralism, are about cataloguing and classifying things in nature. Explain why things are the way they are? That's natural science. In derogatory terms, natural science looks to generalize and idealize and abstract, ignoring as many differences as possible. Natural history embraces diversity, and is more like butterfly collecting-and-organizing. In general, we need integrated approaches that allow for collecting diverse facts in the context of an ever-improving explanatory theory.

Approaches to building software are ever-expanding, and we spend no effort trying to understand why, primarily because computer science is not a natural science, and doesn't approach the problem of explaining why things are one way and not another. Most of the answers to those questions lie in a study of the human mind, not in a study of the machines that humans build. Studying software without studying cognition is like studying animal tracks without studying the animals.

Thursday, August 20, 2015

The Wrong Tools

We are using the wrong tools to program, and the wrong criteria to judge good programs, and good programming practices. These bad practices, and bad approaches to thinking about the nature of programming, have emerged together over the last 70 years.

Our first mistake is the emphasis on code itself. I understand how high-level languages can seem very empowering, and so it seems natural to treat 'polishing code' as a means of achieving quality, and 'code standards' as a means of improving group collaboration.

But even though these are accepted practices, they are not correct. When we make any improvements at all, we are not actually following these practices. The question of what we are actually doing is not even a topic, and it's not examined in any kind of computing institution, academic or industrial. This is true despite the fact that everyone with any experience or sensitivity is absolutely certain that there's some fundamental expressiveness problem they can't quite put their finger on.

Let's say that code has two purposes.

On one hand, we have built machines and programs that can read code and do something with it, based on various kinds of agreements, mostly unspoken, mostly not understood, not studied, not explicit, and incomprehensible, but which maintain an illusion of explicitness, precision, consistency, and predictability -- probably because there are symbols involved, and we instinctively tend to respect symbols and construe them as meaningful in and of themselves.

The other purpose of code is to express and explain your thoughts, your hopes, and your desires, to yourself, and to your colleagues, specifically regarding what you would like this system of loose engineering agreements to do in all kinds of circumstances.

At the heart of both of these 'uses of code', the operational and the expressive, are human meaning and ideas. These are not understood by the machine, in any sense. We take subsets of human ideas and create "instructions" for "operations" in the machine that are in some way reminiscent of those ideas, usually highly constrained in ways that require a great deal of explanation.

That's on the operational side! This is just as true on the expressive side, where we have new ideas that we are trying to express in highly constrained ways that can still be read by humans, on these interlocking systems and platforms of strange, highly constrained ideas. And of course -- most of you can guess -- these "two purposes" of code really are the same, because most programmers build various layers of code that are essentially new machines, defining the operational platform on which they then express their application ideas.
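
Here is a tiny, hypothetical sketch of that layering. The lower function is a small 'machine' someone built (a retry wrapper); the upper function expresses an application idea in the vocabulary that machine provides. All names are invented for illustration.

    def retry(times):
        """Lower layer: a little 'machine' that redefines what calling a function means."""
        def wrap(fn):
            def run(*args, **kwargs):
                last_error = None
                for _ in range(times):
                    try:
                        return fn(*args, **kwargs)
                    except Exception as error:   # broad catch: this is only a sketch
                        last_error = error
                raise last_error
            return run
        return wrap


    @retry(times=3)
    def fetch_exchange_rate(currency: str) -> float:
        # Upper layer: the 'application idea', expressed on top of the new machine.
        # A reader cannot recover the intent ("the rate service is flaky, try a few
        # times, then give up") from the operational details alone; it has to be explained.
        raise ConnectionError(f"pretend the {currency} rate service is down")


    if __name__ == "__main__":
        try:
            fetch_exchange_rate("EUR")
        except ConnectionError as error:
            print("gave up after retries:", error)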

Which means that the code is the least important part of computing. The mind-internal meaning of all these constraints needs to be explained, so that it can inspire the correct meaning in the minds of the humans taking on various roles relative to various parts of the software.

Without explanation, code is meaningless. Without human minds, symbols are meaningless. Code is "operational", so we are fooled into thinking the meaning is 'in the machine'. But that meaning is also in the heads of people, who either made the machines or use them.

If this is true, then good explanation -- explanation that is genuinely useful, which genuinely makes the lives of the people involved easier and better -- is the heart of computing. This needs to be recognized, emphasized, and facilitated. Code is merely a kind of weak shorthand that we use, badly, to pass hopeful, incoherent indications to artifacts that other people have created.

Existing formal languages and their tools -- based on a uselessly constrained approach to symbolic representation -- are woefully inappropriate for this, based as they are on a rather trivial formal definition of computation, which has been accepted because of the rather amusing belief -- no more than an unexamined dogma -- that anything can be 'precisely described' with symbols, boolean logic, parameterized functions, and digital representations. None of this is "true", or even sensible, and the belief in these dogmas shows how far computing is from the natural sciences.

In the meantime, we need new programming tools with which we can more completely express new concepts, and more easily express our feelings and desires.

Monday, June 1, 2015

Theory and experiment in the natural science of computing

Although it is outdated, I find the perceptual-conceptual division, in the history of psychology, to be quite interesting. On the one hand, we could just be dismissive of the division today. The argument runs something like this: categorical thinking is employed in a preliminary way in the sciences, but it is inevitably 'wrong', because categories are mental constructions ... the world itself, including the world we construct, isn't made from categories ... so the perceptual-conceptual distinction is one of those discardable stages of science, and we should just move on.

But, still, there is a real mechanism, within the brain, which we still do not understand, which leads us to produce categories, properties, objects, and other mental constructs about the world. This mechanism, whatever it is, 'makes' our continuous world discrete. The primary argument in favor of abandoning the distinction -- that concepts can affect perception and vice versa, making them intractably intertwined, tempting neologisms into existence like 'perconception' or 'conperception' -- means the distinction is something we need to abandon as an investigative principle, but not as a subject of investigation. "Continuous" or "field-like" conception-perception is an innate phenomenon. "Discrete" conception-perception is also an innate, and very complex, aspect of mental life, engaged in the use of symbols, for example, and probably rooted in some aspect of our language faculty. But I don't see many people studying either one, let alone the overlaps between them.

In computing, knowing that these conception-perception phenomena exist is very important -- unfortunately, the mind-internal nature of discrete descriptions seems to be continually forgotten by computing's chief practitioners, in industry and academia. This is clearly a detriment to their understanding of symbols, meanings, implementations, communications, definitions, and uses, from the perspective of the natural sciences. But it's also completely understandable that they've become victims of this problem, because the belief that we have some kind of direct mental connection to the outside world is an instinctive illusion, one of the most powerful ones -- a curtain that is difficult to lift.

Among the most fascinating of the influences of conception upon perception is the effect that awareness of different "conceptions of perception" can have on the subject.

There are several examples, but I'll take one that many people are familiar with. Take someone who draws or paints from real life onto a two-dimensional surface. In one way or another, an artist learns an idea, a concept: that there's an extra kind of perception, one that allows them to "flatten" what is in fact already a flat projection on the retina, but which our visual system works mightily to "inflate" into a perception of space. The artist learns to "re-flatten" this perception and look for proportions and shapes among the results, which can be used to produce the 2d image. In fact, this is something one can practice mentally, without drawing or painting. And the sense of proportion learned through this exercise also helps in producing 3d sculpture, which is interesting in its own right.

The conceptual awareness that this kind of perception is possible, makes it achievable.

I'm more interested in a somewhat different concept-to-percept influence, mentioned earlier: the perception of things as objects, categories, properties, etc. Some of this work is done innately by the visual system, of course -- for example, everyone is familiar with the color divisions of the rainbow, although there are no such divisions in the spectrum itself.

But the naming of the perceived groups of colors is an "objectifying" act, that is, something we choose to do, no matter how innate the mechanism we use to do it. From the limited impression we get within the conscious experience, it seems like the same kind of act as, say, our imagining that the nervous system is a discrete network, or treating people as interchangeable nodes in a corporate hierarchy, or almost any kind of counting, or the mental separation of 'things' that we perceive as separable.

I say "choose" because there's another way of perceiving, which we also do naturally, and those ways-of-seeing are apparently in something like a competition. We also see things as 'stuff', or 'fields', for lack of a better word, or 'centers': perceivables that are coherent but which we have not yet considered 'objects' with 'boundaries' etc. This kind of 'stuff-perception' allows us to see, or experience, the sky, or a field, or a forest ... without thinking of it as an 'object'. One can think of 'reductionist' mental transitions, for example from "forest" to "trees" to "potential logs", as a kind of gradient from "field perception" to "object conception".

Not surprisingly, awareness of the existence of these two kinds of perception can help a person to decide to use one, the other, or both. This is interesting to computing in a number of ways. 

First, it means that any task making use of a 'mode of perception' could benefit from explicit pedagogy about its psychological reality -- its mind-internal reality. Although it's visible in some research, and common in personal training, the pedagogy of 'modes of perception' has not been rigorously studied. The field of architecture is a good example: Christopher Alexander wrote a number of books intended to help the reader perceive buildings and cities in the 'field-like' manner, but the effectiveness of the books in achieving this has not been studied. Readers just try it, and see if it works for them. That doesn't really get us anywhere.

Second, an explicit awareness of these distinct modes of perception can help us to identify particular mental factors, qualities, and effects, that enter into the use of computer interfaces, including those interfaces used by programmers, and allow us to judge the quality of the outcomes of the use of those interfaces. 

I believe these distinctions could unleash an experimental goldmine.

So now I'd like to discuss briefly one story from the history of psychological experimentation. 

There's a very good book by Robert S. Woodworth, from 1931, entitled Contemporary Schools of Psychology, which describes the various ideas and motivations of researchers that he knew or worked with personally. It describes the 'method of impression', which allows us, for example, to look at an optical illusion, like Mach Bands, and consider our impression an experimental result -- we can say 'the illusion is present in this image' or not, allowing us to test questions about the nature of the stimulus and the response.

Psychology emerged from philosophy in the 19th century, and so issues of consciousness were quite important to early experimental psychologists. When an investigator asks a subject 'which weight seems heavier?' as the weights are lifted in different orders, the investigator is relying on the impression of the subject. The primary interest is the effect on consciousness, even though this is a very objective experiment when properly carried out.

But a reaction to this emerged, from the animal psychologists, whom Woodworth describes as feeling "dismissed". The psychological establishment at the time felt that, although animal conscious experience likely exists, it could only be a supposition, since we cannot ask animals anything, and hence no rational psychological experiments could be carried out on them.

This reaction became Behaviorism. The behaviorists tried to define 'objective' as some activity that you could measure without giving credence to a subject's interpretation (their claim that this 'increase in objectivity' was achieved was rejected by many at the time, with the same rational epistemological arguments we would use to reject it today). This allowed them to experiment with factors of instinct and experience that entered into animal behavior, and to put humans and animals on equal ground. Unfortunately, they also threw the baby out with the bathwater. They had one overriding theory of the animal, or the person, and that was the association of stimulus with response, whether learned or innate. Presuming the underlying mechanism behind your subject of inquiry is a terrible approach to theory construction, because you won't even notice the effect of this assumption on your observations.

A perfect example was the behaviorist John Watson's replacement of the term "method of impression" with "verbal report". The latter he considered 'behavior', and so it was 'acceptable science', and this way he could include previous work on Mach Bands, or afterimages, or heaviness, or taste distinction. We can see the danger Watson introduced here: the experimenter was now assigning a meaning to the verbal report. So even more subjectivity was introduced, but now it was hidden, because the theory of mind, the theory of whatever generates the behavior, was no longer explicitly part of the research.

This methodological dogma had another effect -- the generation of many decades' worth of meaningless data and statistical analyses. When you decide that you've suddenly become objective, and have turned a potentially structure-revealing impression into a datum, then you have no choice but to collect a great deal of data to support any conclusions you might draw from your study, to lend it, I suppose, some kind of intellectual weight. A corollary is that this tends to make the findings rather trivial, because you're no longer constructing interesting questions, but are instead relying on data to confirm uninteresting ones. The upside is that you can quickly determine whether, say, afterimages are only seen by certain people under certain conditions. The downside is that the investigator is no longer building a model of the subject of inquiry, and so tends to ignore the subtle and interesting influences on the perception of afterimages. The theories that emerge then lack both descriptive coverage and explanatory force. Of course, many investigators did build models, and also followed puzzling results, so, at best, one can only say that behaviorism had a negative influence on the study of cognition as a natural science, but not an all-consuming one. Most behaviorists were not extreme behaviorists.

But, to return to our original theme, the negative influence is no one's fault. Natural science tends to defy our instincts, and there were innate cognitive tendencies at work here -- tendencies that are still not studied -- which led mildly dogmatic researchers to simple 'objectifying' stimulus-response models. Behaviorism expresses a point of view that we return to, often. In the computer world, you hear it a lot, and not just in the poor cognitive methodology that pervades AI. Even idioms like "data-driven" and "data is objective" are meaningless by themselves. Such phrases sidestep the really important questions: which data, generated how, while asking which question, based on what theory, interpreted how and by whom, framed by which assumptions, and making use of which human or non-human faculties? The idea that 'objective data' somehow 'exists', without an experiment, without a context for interpretation, without the influence of human meter-builders and human interpreters, is just not correct. But people tend to make such claims anyway. It's an innate tendency.

So, what would good psychology as a natural science look like, when applied to improved understanding of the mental life of computer engineers?

If we're looking to build better theories of complex systems based on evidence, we can't do much better than to look at the eponymous scene in the documentary "Is the man who is tall happy?", in which Michel Gondry interviews Noam Chomsky about linguistics.

Take the sentence "The man who is tall is happy". A native English speaker will report that this is grammatically correct, and if you're one, you can use the 'method of impression' to demonstrate that to yourself. Now, turn the sentence into a question. You can do this grammatically through movement: "Is the man who is tall happy?" You can also see that the following variant is not grammatical: "Is the man who tall is happy?" A native speaker would just say it's wrong.

But why do we move the second "is", the one furthest from the beginning of the sentence, to the front? Why not the first "is"? Let's just say that scientists enjoy mysteries, and puzzles about nature, and so we need to build a theory to answer that question, and hope that the answer can be more broadly enlightening -- which, among other benefits, means the theory could predict other results.

The overall answer is that language has its own innate structure. The second "is" is actually the structurally most prominent element in the sentence (you can demonstrate this to yourself with a tree diagram), and so it is the easiest to move.
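
For those who would like the demonstration spelled out, here is a rough sketch: not a parser, just the standard bracketing written as a nested structure, with each "is" annotated by its depth of embedding. The node labels and the data structure are my own illustration. The first "is" sits deep inside the relative clause modifying "man"; the second is a direct constituent of the sentence, and that is the one that fronts.

    # "The man who is tall is happy", written as a nested structure.
    sentence = (
        "S",
        [
            ("NP", [
                ("Det", ["The"]),
                ("N", ["man"]),
                ("RelClause", [
                    ("Pron", ["who"]),
                    ("Aux", ["is"]),      # the first 'is': buried inside the relative clause
                    ("Adj", ["tall"]),
                ]),
            ]),
            ("Aux", ["is"]),              # the second 'is': a direct child of S
            ("Adj", ["happy"]),
        ],
    )


    def find_aux_depths(node, depth=0):
        """Return (depth, word) for every auxiliary in the tree."""
        label, children = node
        if label == "Aux":
            return [(depth, children[0])]
        found = []
        for child in children:
            if isinstance(child, tuple):
                found.extend(find_aux_depths(child, depth + 1))
        return found


    if __name__ == "__main__":
        for depth, word in find_aux_depths(sentence):
            print(f"'{word}' at depth {depth}")
        # The shallower (structurally higher) 'is' is the one that fronts:
        # "Is the man who is tall happy?"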

This would be impossible to determine simply by statistical analysis of text with no specific questions in mind. The use of statistics is motivated by our ignorance (I'm not using that word pejoratively) of the structure that underlies, and generates, the surface behavior. The separation of variant and invariant structure can be made through analysis, including statistical analysis, of verbal reports, but only if there is a question in the mind of the theorist about the generating structures. Any statistical analysis that does not officially have an explicit structural question to answer is only hiding its assumptions and stipulations about structure, by adding interpretations at the beginning and at the end.

Note that these structural theories are idealizations -- nature is not transparent, and we have limited attention, so we need to know what we're asking questions about, and what we're not asking questions about.

Notice also how much further we can get in formulating structural questions when we accept the 'method of impression' along with the probing strengths of our mysterious capacity to think. There are plenty of questions about the human mind that require massive data sets to answer. But it's unlikely that any of those questions would be interesting if we weren't using, explicitly or implicitly, the method of impression, so we could narrow those questions sharply. Moving forward towards understanding the "act of programming", as a natural phenomenon, will require that everyone understands the power of this method, which has enabled so much progress in linguistics (Noam Chomsky's initiatives), and art & design (Christopher Alexander's initiatives), over the past 60 years.