Why don’t they teach modelling in schools? Part II

4 Oct

Say what you’re not saying, don’t say it, say what you didn’t say

Last time I blogged that modelling is not limited to software engineering, play and simulation, but is universal in human endeavour. I mentioned that accuracy is an important but insufficient consideration in assessing a model. What other considerations are there?

My favourite lens for looking at a model is abstraction. In philosophical terminology, abstraction is about grouping concepts together at decreasing levels of detail. So, a duck is a duck and no other thing is a duck (no matter how it looks or walks or sounds); but applying abstraction allows us to talk about birds in general and say useful things about all of them, which would be rather exasperating if we had to list every bird in the world first. This kind of classification is a particular feature of object-oriented programming languages (which may or may not be a good thing).
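To make that concrete, here is a minimal Python sketch (the Bird and Duck classes are my own illustrative inventions, not anything from the post) of how an abstraction lets us say something once, at the level of the group, rather than once per member:

```python
# Abstraction as classification: "Bird" groups concepts together
# at a lower level of detail than any particular duck.

class Bird:
    """The abstraction: particulars of any one bird are left out."""
    def __init__(self, name):
        self.name = name


class Duck(Bird):
    """A concrete case: a duck is a duck and nothing else is."""
    def quack(self):
        return "quack"


def lays_eggs(bird):
    # A useful thing said once about all birds, without having
    # to list every bird in the world.
    return f"{bird.name} lays eggs"


print(lays_eggs(Duck("Mallard")))  # Mallard lays eggs
```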

Leaving it out

A modeller, not saying

However, another way of considering abstraction is to pause before asking what a model is saying, and instead ask: what is this model not saying?

The model of biological change that we call evolution has incredible empirical support, so that its application has great explanatory and predictive power (some would even say that we don’t apply it enough). Strangely, though, it seems to cause an awful lot of consternation to those who subscribe to another model called creationism.

Why strange? At first sight, both of these models deal with how the world came to be the way it is. But evolution models a process, and has nothing whatsoever to say about how that process began, or why it began, or who began it. Conversely, creationism says nothing about how its proposed agent went about his craft (well, usually). He just did it. Apples and oranges.

Putting it back in

Any critical analysis or use of a model has to stick carefully to assessing or building upon what it actually models. This might sound simple, but humans find it remarkably tricky. We are fond of making cultural and doctrinal assumptions and applying intuitions without realising it. (In a black alley, a black cat spies a black rat. How?*) Unfortunately this is not only inevitable, it is usually necessary.

Why so? Models almost always rely on background information. Of particular interest in computer science and artificial intelligence is the notion of semantics: the meaning of symbols. Tell a robot to fetch you a cuppa, and it may suffer the same semantic confusion as is now affecting US readers: a cuppa what?
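As a rough sketch of that confusion (the symbol table and the fetch routine are entirely hypothetical, invented here for illustration), the same symbol can denote different things depending on the semantic background the interpreter carries around:

```python
# The symbol "cuppa" has no meaning on its own; the meaning lives
# in the background mapping supplied alongside it.

UK_SEMANTICS = {"cuppa": "a cup of tea"}
US_SEMANTICS = {}  # no entry: the symbol is undefined here


def fetch(symbol, semantics):
    meaning = semantics.get(symbol)
    if meaning is None:
        # Without background information, the robot is stuck.
        return f"Semantic confusion: a {symbol} of what?"
    return f"Fetching {meaning}"


print(fetch("cuppa", UK_SEMANTICS))  # Fetching a cup of tea
print(fetch("cuppa", US_SEMANTICS))  # Semantic confusion: a cuppa of what?
```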

Hokey-cokey

However, problems arise when the semantics are ambiguous, and I submit that they almost always are. I find in my job that when presenting a model I have to spend a good chunk of the conversation heading off potential misunderstandings with sentences like, “Note I’m not saying there’s a connection, just that Professor Guo was in the Study at the time and you don’t use Lead Pipe to do Next Generation Sequencing.”

Schools concentrate on implanting into children a kind of approved default semantic background, to equip them to understand what models are saying. I believe it is just as important to teach them how to question what models are not saying, and to be careful about filling the gap inappropriately with assumptions, intuitions, or beliefs.

*It’s daytime

Image from: http://www.aip.org/history/einstein/images/ae76.jpg

One Response to “Why don’t they teach modelling in schools? Part II”

  1. Kevlin Henney December 8, 2011 at 4:56 pm #

    One minor clarification: abstraction is not necessarily concerned with classification and generalisation, although it is often used in support of classification and generalisation. It is sometimes simply a means to simplify, to eliminate detail that is unnecessary and distracts from the task at hand.

    A model of a business, whether in terms of cash flows or programmatic objects, is an abstraction without being a generalisation or classification. If I were to compare different businesses with respect to the same criteria, the resulting abstractions would form the basis of classification; hence the common coincidence of the two, and hence the identification of abstraction with that process.

    Likewise, the London Underground map represents an abstraction of London with respect to the Underground system, but it is not a generalisation of London. To compare metropolitan rail networks in different cities I would, of course, look at corresponding abstractions.
