SHAPING THE EXPERIENCE OF YOUNG AND NAÏVE PROBABILISTS

This paper starts by assessing deficiencies in the teaching of statistics before summarizing research that has focused on pupils' misconceptions of probability. In contrast, our own previous research explored what pupils aged 11-12 years do know and can construct, given access to a carefully designed environment. These pupils judged randomness according to unpredictability, lack of pattern in results, lack of control over outcomes and fairness, as indeed would experts. However, it was only through interaction with a virtual environment, ChanceMaker, that the pupils began to express situated meanings for aggregated long-term randomness. Those data are then re-analyzed in order to reflect upon the design decisions that shaped the environment itself. Four main design heuristics are identified and elaborated: testing personal conjectures, building on pupil knowledge, linking purpose and utility, and fusing control and representation. It is conjectured that these heuristics are of wider relevance to teachers and lecturers who aspire to shape the experience of young and naïve probabilists through their actions as designers of tasks and pedagogical settings.

In the curriculum for pupils aged between 11 and 14, statistics includes the following elements: (a) the handling data cycle; (b) presentation and analysis of grouped and ungrouped data, including time series and lines of best fit; (c) measures of central tendency and spread; (d) experimental and theoretical probabilities, including those based on equally likely outcomes. In practice, teaching approaches tend to focus only on techniques such as (b) drawing histograms, (c) calculating averages and interquartile ranges, and (d) calculating probabilities, with little reference to the contextual basis for statistics as intended in (a).
Certainly, there are developments in research and curricula which are trying to remedy these deficiencies. For example, some teachers have begun to deploy exploratory data analysis (EDA) techniques to involve pupils in the manipulation of data drawn from real-world contexts and to avoid the procedural computation of descriptive measures of average and spread, or the mindless application of formal statistical inference, which can be thought of as the mere turning of a handle to produce statistical information. The danger, under such circumstances, is that the statistics produced and the techniques used are misapplied and inappropriately interpreted.
Technology can also make the situation worse: the use of calculators or software tools such as SPSS can lead to a lack of awareness of the limitations and constraints that should be applied to the automated techniques.
For example, SPSS is beautifully designed to support the knowledgeable user by providing tools that are powerful and yet easy to apply. However, the simplicity of application can lead to unintentional abuse by naïve users. By way of illustration, consider one of the simpler statistical procedures, a test of the hypothesis that the means of two populations are different.
SPSS supports data entry and computes the p-values almost at the touch of a button. The correct use of such a procedure however requires an understanding of whether the two samples are independent or paired, whether there is reason to believe that the standard deviations of the two populations might be different, and how to interpret the p-value itself. The power of SPSS can enable naïve learners to obtain a result without a proper appreciation of the underlying assumptions and the outputs provided by the system. This is not a criticism of SPSS itself since it is designed as a tool for knowledgeable users but an example of how technology does not necessarily support the teaching and learning of statistical ideas.
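These distinctions can be made concrete in code. The following is a minimal sketch, not SPSS, with invented sample values: the same two-sample comparison run three ways in Python using scipy, where each underlying assumption must be chosen explicitly rather than being hidden behind a button.

```python
# Hypothetical illustration with invented data: comparing two group means.
# The correct test depends on assumptions the software cannot choose for you.
from scipy import stats

group_a = [23.1, 25.4, 24.8, 26.0, 22.9, 25.1]
group_b = [27.2, 26.8, 28.1, 25.9, 27.5, 28.4]

# Independent samples, assuming equal population variances (pooled t-test):
t_pooled, p_pooled = stats.ttest_ind(group_a, group_b, equal_var=True)

# Independent samples, NOT assuming equal variances (Welch's t-test):
t_welch, p_welch = stats.ttest_ind(group_a, group_b, equal_var=False)

# Paired samples (e.g. before/after measurements on the same subjects)
# require a different test again:
t_paired, p_paired = stats.ttest_rel(group_a, group_b)

print(p_pooled, p_welch, p_paired)
```

Each call returns a p-value, but only one of the three is appropriate for any given design; choosing between them requires exactly the understanding (independence, variance assumptions, interpretation of the p-value) that push-button tools allow the naïve user to bypass.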
Similarly the unthinking use of calculators in mathematics can lead to some loss of skills in mental arithmetic. However, banning the use of calculators in mathematics classrooms also means that many pedagogic opportunities for the effective and imaginative use of calculators are missed. Similarly, ignoring the opportunities that digital technology offers statistics teachers would be a terrible pedagogic waste.

TEACHING PROBABILITY WITH TECHNOLOGY
There is now emerging software which has been specially developed as an educational tool, a tool for learning statistics, rather than a productivity tool for doing statistics (the category reserved for SPSS and similar packages). Two examples stand out: Fathom (n.d.) and its younger sister, Tinkerplots (n.d.). In a sense, these packages have perhaps been inspired by dynamic geometry software such as Cabri Geometry (n.d.) or Geometer's Sketchpad (n.d.), and software such as Fathom or Tinkerplots is sometimes referred to as Dynamic Statistics software.
Certainly they exploit the digital possibilities of dynamic representations and visualisation to provide intuitive tools for learning about statistics.
However, as with calculators, all tools, whether inspired at the design level or not, can be used in ways neither in keeping with the reform agenda around the world, nor as envisaged by the designers. As a teacher and a software designer, the first author's own work over many decades has been inspired by LOGO and the corresponding constructionist movement (Harel & Papert 1991). The constructionist vision imagined pupils playing with tools to build artefacts, whether virtual or not. The building process would promote enjoyment and purpose through a sense of control and ownership. Projects that began through explorations of the media would enable mathematics to be perceived as useful. In practice, many teachers have in the past used LOGO in their classrooms in ways very different from the constructionist philosophy and, perhaps as a result, LOGO has almost disappeared from mathematics curricula, which have developed along an opposite path to that envisaged by the original LOGO developers.
Teachers therefore have to act as designers themselves in deciding how to employ such tools to shape the way pupils think about probability and statistics. From research on students' 'thinking-in-change' (Noss & Hoyles 1996) about probability and its relationship to the design of software tools, some findings emerge about pupils' learning of probability but also about teaching probability. Interestingly, because this work reflects on the design of tools, the findings have implications too for teachers of probability as they seek to design the learning experience of their pupils. Below, some of those implications are elaborated. We know from the experience of LOGO that teachers will take from radical software what they think they can use, and so carefully designed teacher in-service courses need to emphasise the pedagogic imperatives built into the software design.

DESIGN HEURISTICS
This paper proposes heuristics that might usefully guide design decisions for pedagogues who might be building software, creating new curriculum approaches, imagining a novel task or writing a lesson plan. The term "heuristics" is used with its standard meaning of a rule of thumb or guiding rule, rather than in the specialized sense that has been adopted by researchers of judgments of chance. With this aim in mind, the data from research on pupils' meanings for randomness as emergent during activity are analysed again.
We begin by deconstructing the title for the paper, starting from the end, in order to expose various assumptions that will help to place the paper within a particular theoretical stance.
Probabilists. The design heuristics reported below are the result of research on meanings for randomness and chance, aspects of probability which separate the topic from exercises in set theory logic using, for example, Boolean algebra, but which are tuned to a modelling perspective on randomness. In fact, the intention is that the designer/teacher with interest in statistics more generally will nevertheless find resonance with the heuristics, which have relevance beyond probability, perhaps even to other areas of mathematics.
Young and naïve. In fact, it seems there are probabilists at one level of naivety or another at all ages; after all, Fischbein (1975) reported that Kindergarten children were able to discriminate relative frequencies. Furthermore, the strongly influential studies of Kahneman, Slovic, & Tversky (1982) and others show how adults, including those who are statistically trained, often make judgments of chance using heuristics that are subject to systematic bias.
Additionally many other misconceptions relating to the understanding of probability amongst a wide range of ages have been identified, for example, the equiprobability bias (Lecoutre 1992) and the outcome approach (Konold 1989). Naivety, or worse fallibility, appears to be endemic.
Perceptions of randomness itself have been studied by many researchers, though again the predominant analysis has led to the identification of errors. In reviewing and classifying this research, Falk & Konold (1998) identified two categories: generation-type tasks, where subjects were required to predict outcomes from a random experiment, and recognition tasks, where people were expected to state which sequences had previously been generated by a random mechanism. Such studies found a tendency for people to anticipate too many short runs of results in a random sequence and to regard sequences containing long runs as non-random. Green's studies (1983), which confirmed the tendency of people to reject long runs, acknowledged that, alongside a failure to understand the role of independence in successive trials, there was a failure to make effective reference to pattern recognition and unpredictability.
A constructivist perspective requires us to acknowledge that new meanings are necessarily built on previous knowledge resources. Design therefore involves identifying mental resources that might act as building blocks for a more sophisticated understanding. In line with Smith et al (1993), who argue for a re-conceptualization of misconceptions, the intention is to emphasize competence and promote an understanding of how naïve mental resources might be supported as they become increasingly sophisticated, a process diSessa (1993) has referred to as "tuning towards expertise".
Shaping the experience. This phrase is imbued with theoretical assumptions. No self-respecting educationalist fails these days to espouse constructivist credentials, forcing us all into admitting that our influence on learning as educationalists is at best indirect. Shaping what pupils learn is the best to which we can aspire. But the nature, never mind the size, of the influence of factors such as setting, tools, culture, motivation, belief systems, identity and existing knowledge, to name but a few, is not well understood. Nevertheless, it is taken as axiomatic that such factors are influential and hence attention needs to focus on how designing software tools can have an indirect influence on, or in other words shape, the pupil experience.
The focus of this paper, at a theoretical level, is on how designers of software might leverage opportunities available within their operational setting to have influence over the way in which learners construct stochastic meaningfulness within that designed world. Although the heuristics, which will be presented, emerge from the design of software to research the relationship with young pupils' stochastic meanings, the findings will in this paper be reinterpreted to offer conjectures that might resonate for designers in other contexts; this includes people who might not even think of themselves as designers, such as curriculum planners or innovators and classroom teachers or lecturers. This paper presents new unpublished findings and a conjectured extension for teachers of probability at all levels.

EXISTING AND EMERGENT MEANINGS FOR RANDOMNESS
We present a brief synopsis here of research that has previously been reported (Pratt 1998; Pratt 2000; Pratt & Noss 2002). The study used a design research approach (Cobb et al. 2003) to build a domain of stochastic abstraction, ChanceMaker (n.d.), in which 11-12 year old pupils were able to simulate everyday random generators (gadgets), such as coins, spinners and dice (Figure 1 shows the dice gadget). Readers who download ChanceMaker from the Internet address in the references will be able to explore interactively some of the ideas below by following the suggestions in the boxes. Many of the gadgets were by default programmed to behave in non-standard ways, perhaps with a bias to one outcome or another. Thus, in Figure 1, the die is biased towards the outcome 6. The pupils were challenged to identify which gadgets were "working properly" and which were not.
The pupils had access to a strength control (in practice this was artificial). By pulling the strength control, the pupils could simulate the throwing of the gadget, though in fact the strength would have no effect on the actual result, only on how long the simulated animation would last. In practice, as the pupils played with the gadgets, they often expected the strength control to affect the results. Only as they experimented with the strength control did they convince themselves that, for example, sixes could occur whether they threw the die with 100% strength or 50% strength. During this process, they discussed which gadgets seemed to be working properly. They often conjectured that they could identify patterns in the results as those results were generated one by one, and they typically made predictions as to what would happen in the next "throw".
Eventually, claims about patterns, which were numerous, were discounted in the face of contradictory evidence. No pattern was sustained over longer periods.
They were then further challenged to "mend" any broken gadgets so that they behaved according to the pupils' expectations of randomness. The pupils could control the behaviour of the gadgets by editing the Workings box; a pupil encountering the Workings box in Figure 1 might decide to remove one 6 so that it would then read "choose-from [1 2 3 4 5 6 6]", and later, after further amendment, "choose-from [1 2 3 4 5 6]". The pupils were also able to generate as many results as they wished in a relatively short space of time using a repeat control.
In Figure 1, the pupil has generated 10 throws (in which the outcome 1 did not occur) and then 20 throws, together with the corresponding pie charts.
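The gadget mechanics described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual ChanceMaker implementation: a gadget is reduced to its editable workings list, and "throws" are simulated by sampling from it.

```python
# Sketch of a gadget controlled by an editable "workings" list,
# in the spirit of ChanceMaker's choose-from mechanism (not its real code).
import random
from collections import Counter

def throw_gadget(workings, trials):
    """Simulate `trials` throws of a gadget defined by its workings list."""
    return Counter(random.choice(workings) for _ in range(trials))

biased_workings = [1, 2, 3, 4, 5, 6, 6]   # the outcome 6 is twice as likely
mended_workings = [1, 2, 3, 4, 5, 6]      # after the pupil removes one 6

print(throw_gadget(biased_workings, 20))  # short runs: bias may not show up
print(throw_gadget(mended_workings, 20))
```

As the pupils discovered, with only 10 or 20 throws the counts from the biased and mended lists can look much the same; the difference in the workings only becomes visible in the aggregated results of many trials.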
The window on the pupils' emergent thinking about randomness was provided by the iterative design process; this indicated that the pupils used a range of expert-like meanings described as local (in the sense that these were all situated in the short-term here-and-now); the pupils recognized as random those gadgets which were unpredictable, uncontrollable, unpatterned or fair, a result consistent with Toohey's findings (1996).
Fairness was an important resource that the pupils were able to use to construct more sophisticated global meanings (an aggregated long-term perspective on randomness). When the pupils opened up the gadgets, they continued to generate results as before, except that they were now able to generate results more quickly using the repeat tool. For example, pupils typically generated ten results and observed the results box. They then repeated the experiment to collect an additional ten results. The default setting for the software is that results are accumulated unless a new experiment is begun. Sometimes, therefore, pupils collected results without a direct intention to do so. By observing the results box and inspecting the charts, the pupils began to form conjectures about the frequency of occurrences of different outcomes. For example, pupils usually recognised that something seemed to be wrong about the die gadget. They turned their attention to the Workings box and edited it to redress the imbalance in the number of 6s that they had observed. However, when they generated ten results in a new experiment, they were surprised at the pie chart, which appeared not to be fair. They then typically over-corrected the Workings box and would struggle to find a Workings box that seemed to generate fair pie charts, until eventually realising that behaviour seemed to be different and more predictable for larger numbers of trials.
In their interactions with ChanceMaker, the pupils began to articulate heuristics which, though specific, nevertheless appeared to describe behaviour across the gadgets. These articulations were characterized, for example, as "the more times you throw the dice, the more even is the pie chart" to describe the observation that fairness in the pie chart (as opposed to in the appearance of the gadget) would manifest itself as equal-size sectors when trials were repeated large numbers of times. In order for this heuristic to be expressed clearly it was sometimes necessary to challenge pupils by asking them to repeat their experiment for a small number of trials, even after they had found the pie chart to be "fair" when using a large number of trials.
This causal-like heuristic (more trials cause a more even pie chart) seemed to capture the essence of what the expert might refer to as the Law of Large Numbers.
A further challenge was to explore what happened when the workings box did not reflect equi-distribution. Some pupils were gradually able to articulate the heuristic: "The more even is the workings box, the more even is the pie chart, provided the number of trials is large", which appears, in expert terms, to acknowledge the role of distribution.
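The pupils' two heuristics correspond, in expert terms, to the convergence of relative frequencies towards the distribution encoded in the workings box. The following sketch (again hypothetical, not ChanceMaker's code) shows the proportion of sixes from a biased workings list settling towards its share of the list, 2/7, only as the number of trials grows.

```python
# Sketch of the pupils' heuristic: relative frequencies settle towards the
# workings-box proportions only for large numbers of trials (Law of Large
# Numbers).  Assumed workings list: [1 2 3 4 5 6 6], so P(6) = 2/7.
import random

def proportion_of_six(workings, trials, seed=0):
    """Return the observed proportion of sixes in `trials` simulated throws."""
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    hits = sum(1 for _ in range(trials) if rng.choice(workings) == 6)
    return hits / trials

workings = [1, 2, 3, 4, 5, 6, 6]  # biased die: P(6) = 2/7 ≈ 0.286
for trials in (10, 100, 10_000):
    print(trials, proportion_of_six(workings, trials))
```

With 10 trials the observed proportion fluctuates wildly; with 10,000 it sits close to 2/7, which is exactly the behaviour the pupils summarised as "the more even is the workings box, the more even is the pie chart, provided the number of trials is large".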

HEURISTICS FOR SOFTWARE DESIGNERS TO SHAPE THE EXPERIENCE
The data in the original research have been re-analysed from the perspective of the software design. Whereas, in the section above, the focus was on the pupils' thinking-in-change about randomness within the ChanceMaker setting, in this section the emphasis is reversed to reflect on the ChanceMaker design in the context of how pupils' thinking changed. This re-analysis has yielded four main design heuristics that were significant in shaping the pupils' ChanceMaker experience and may have wider relevance: testing personal conjectures, building on current knowledge, linking purpose and utility, and fusing control and representation. The pertinent design heuristics are described and analysed below.

Testing personal conjectures
The ChanceMaker pupils articulated the four local meanings for randomness described above but the pupils' use of these meanings was very sensitive to small (from the designer's point of view) changes in the setting. Thus, a pupil might characterize a spinner as random because of the feeling of being unable to control the outcome (lack of control), perceived through the spinning action, but the same pupil might judge a die to be random even without throwing it by reference to its apparent symmetry (fairness). Some situations might have been seen as exhibiting more than one behaviour in contradictory ways. For example, an irregular spinner, which was not fair, might have been seen as not random from that perspective and yet also random since it could not be precisely controlled.
Such contradictions were unproblematic for the pupil, who simply used whichever meaning was stimulated by surface aspects of the situation. In this sense, the local meanings for randomness appeared to exist without reference or connection to each other. diSessa (1993) has proposed that such observations are consistent with a view of conceptual change as fragmented, knowledge being conceived of as distributed across many small pieces. This knowledge-in-pieces profile of the pupils' local meanings for randomness led to the careful incorporation in ChanceMaker of opportunities to test out personal conjectures, since it was felt that pupils needed to be able to recognize that their meanings lacked some robustness if they were to construct more sophisticated meanings in the process of tuning towards expertise. As a result, ChanceMaker was designed to incorporate:
• Animations and histories of results, intended to suggest that patterns in sequences of data generated by a gadget, as identified by the pupil, were illusory in the sense that on further experimentation the patterns were not sustained.
• An editable Workings box, so that pupils could test whether, for example, the order of the numbers on the spinner was important.
• A control over the strength of the throw, which was in fact a redundant control in the sense that strength had no effect on outcome, intended to suggest that strength was less important than might have been assumed.

Building on current knowledge
Testing personal conjectures in a well-designed environment can lead to a position of cognitive conflict (see, for example, Piaget 1985). Piaget asserts that mental development emerges out of attempts to resolve cognitive conflict in a process of equilibration. While, according to Piaget, assimilation requires adaptation by simple integration of the conflicting ideas into an existing mental structure, breakthrough development involves a restructuring referred to as accommodation. However, as a designer, it is necessary to provide pathways that both anticipate, indeed stimulate, such cognitive conflict and at the same time offer a resolution of the imbalance. In the preamble, we described dissatisfaction with simply reporting on pupils' fallibility, and indeed we have reported above on pupils' expert-like local meanings for randomness; the root of this dissatisfaction as a designer lies in the failure of misconceptions research to elaborate such pathways.
We sought to design into ChanceMaker a mechanism by which these local meanings could be exploited and coordinated, a process motivated by the notion of tuning towards expertise.
In this sense, we go beyond the often espoused notion of cognitive conflict by aiming to provide a pathway through that conflict to some emerging resolution. With these aspirations in mind, ChanceMaker included the following two elements.
i) Gadgets, which were intentionally designed to look and behave as far as possible like their everyday counterparts, so that prior intuitive knowledge might be triggered. At the same time, the gadgets could be opened up and mended or changed, actions that are usually very difficult, if not impossible, with everyday materials. It is often commented in critiques that perhaps the pupils did not regard the computer's quasi-randomness as the same phenomenon as might be observed through an ordinary material die. In fact, the data in the study do not support the idea that the pupils made such a distinction. The pupils were occasionally sceptical in the first instance but such doubts were forgotten or resolved when they experimented and found that the computer gadget obeyed the pupils' own criteria for judging randomness.
ii) Graphical representations, such as the pie chart, which showed fairness, not as a property (or non-property) of the die itself but of the aggregated results it generated, opening up the possibility that the pupils would re-attach their notion of fairness from the gadget to its outcomes, providing the necessary pathway to a resolution of cognitive conflict.

Linking purpose and utility
Elsewhere, we have discussed the necessity that tasks are designed in ways that pupils regard as purposeful (Ainley, Pratt & Hansen 2006). Purpose is seen as a design construct in that those who might aspire to shape pupils' experience might attempt to design settings and tasks with engagement of pupils in mind. Experience has suggested that design efforts can be steered by certain heuristics such as building and mending tasks, problems that raise curiosity, and areas of controversy. At the same time, we have argued that purpose in itself is insufficient and must be linked to utility, a cognitive sense of the scope of the key mathematical concepts. For example, a purposeful task may or may not lead to a sense of the domain of applicability of a situated meaning for a key concept, in other words its level of specificity or generality. Designing to link purpose and utility emerges from constructionist aspirations (Harel & Papert 1991) to involve pupils in their own abstracting process.
In ChanceMaker, the pupils were challenged to mend broken gadgets and this proved to be an engaging and consuming pursuit for them. It was however key that this task also led to a sense of utility for distribution as represented by the Workings box. The pupils came to understand that the distribution, or rather the Workings box, is a predictor of the proportions of results, but that the validity of this predictor is somehow predicated on the number of trials. From a design perspective, the crucial decision was to centre the ChanceMaker activity on mending, which led inevitably to engagement with the Workings box.

Fusing control and representation
In fact, that utility stemmed from using the Workings box to control how the gadget behaved. The constant editing and re-using of the Workings box, coupled with feedback in the gadget's animation, the history of results and graphs of aggregated results, rendered the controlling Workings box meaningful. Indeed, gradually as the pupils became increasingly familiar with the nature of that control, they were able to predict behaviour of the gadget from the appearance of the Workings box. In this sense, the Workings box had become a representation for them of distribution. It seems that the link between purpose and utility can be facilitated by building salient mathematical representations as controls within the domain of abstraction in question.

HOW TO SHAPE THE EXPERIENCE OF NAIVE PROBABILISTS
The four design heuristics above have emerged from the systematic study of pupils' thinking-in-change about randomness. To what extent are those heuristics tied to the specific operational setting, involving software design, young pupils and randomness? This is, of course, difficult to judge without further research. Nevertheless, the heuristics suggest four possible ways of enhancing the meaningfulness of probabilistic concepts, which merit further investigation:
• providing space for the testing of conjectures;
• identifying and building on current pupil knowledge;
• linking purpose and utility;
• fusing control and representation.

Conjecture 1: "Meaningfulness of probabilistic concepts can be enhanced by providing space for the testing of conjectures"
In higher education there are logistical constraints, such as the very high student-to-lecturer ratio (at least in the conventional lecture format), that push the pedagogic approach towards a stand-and-deliver format. Lectures proceed at the speed of the lecturer, which is unlikely to provide the time and conditions under which pupils would be well-positioned to test personal conjectures. Nevertheless, such students are relatively independent and have out-of-lecture space for testing their own ideas, if only they knew how to go about this task. In secondary schools (at least in the UK), the assessment regime pressurizes teachers and pupils into covering the syllabus, which can militate against opportunities for sense-making by pupils.
Nevertheless, the need to provide this sort of space does not appear to be fundamentally a consequence of using digital technology and is consistent with reform agendas in many countries.
ChanceMaker was designed to provide feedback in numerous forms and this feedback was crucial in helping the pupils to recognize whether their personal conjectures held explanatory power.
Feedback forms are highly dependent on the structuring resources available in any particular setting and may be difficult to organize in traditional contexts.

Conjecture 2: "Meaningfulness of probabilistic concepts can be enhanced by identifying and building on current pupil knowledge"
Research on mathematical cognition in general has traditionally placed its emphasis on identifying pupils' misconceptions. Though such work continues, it is now more common to consider the cognitive and socio-cultural conditions under which normalized thinking might be constructed or supported. In statistics education, it is still the case that much research focuses on people's fallibility. A new research effort is needed to understand the intuitive strata that underpin pupil thinking, and new methodologies have to be employed in order to exploit that understanding in the way we offer up our subject disciplines.
The ChanceMaker study not only identified the key role of pupils' appreciation of fairness but also showed how pupils could make use of that naïve knowledge to construct a more sophisticated understanding of randomness and distribution.

Conjecture 3: "Meaningfulness of probabilistic concepts can be enhanced by linking purpose and utility"
It is common to hear pupils both in higher and secondary education despair of the lack of connection and relevance of mathematics and statistics to their lives. However, purpose is no more of a synonym for relevance than is utility for usefulness. The ChanceMaker pupils were not conducting an experiment that they could see would be useful for them outside of school or indeed in their future work. Neither did it have obvious and direct value for forthcoming examinations. Rather, the purpose was generated because their curiosity was aroused. They needed to know, for their own personal satisfaction, how the gadgets might be mended because of some sort of innate human response to problems of this kind.
The tools provided operationalised that need so that the pupils could act concretely and creatively towards satisfying that need. They were in fact prepared to suspend any sense of reality while they engaged with the task. The utility that was constructed was related to an appreciation of how the Workings box impacted upon the pie charts in the short term and in the long term. In this sense, they learned a fundamental truth about the scope of distribution, though for them the knowledge was not appreciated in such grand terms. Again, linking purpose and utility may be easier when using digital technology (though it is still difficult) but utility is a fundamental, though often ignored, aspect of conceptual understanding, and must be given a much higher priority by teachers and curriculum developers.

Conjecture 4: "Purpose and utility of probabilistic concepts can be linked by fusing control and representation"
There is no reason why in principle the above three conjectures should not have validity in conventional settings related to probabilistic learning, or indeed more generally mathematical learning (even if the logistical constraints make them difficult to apply). Conjecture 4 may however remain an obstacle even in principle for conventional settings. Papert (1996) has argued that in conventional settings, it is normal for mathematics to be learned in a way which is somewhat alien. Conventionally, mathematical concepts need to be defined and computed before any rich sense of the concept is constructed. For example, pupils painstakingly learn, year after year, how to draw each and every type of graph and often fail to appreciate interpretation and other uses for graphs. Pupils learn how to calculate mean, mode and median and their separate definitions without necessarily understanding how the notion of average might be constructively employed towards a specific aim.

CONCLUSIONS
During inception, mathematical conceptual knowledge tends to live in a world disconnected from life in general and indeed from other mathematical concepts. Only much later might those connections begin to emerge and might the scope and relevance of the concepts take on meaning. Harel & Papert (1991) argue that this presentation of mathematics is an inversion of what happens when most non-mathematical ideas are encountered. In life outside of mathematics, people tend to learn through using.
According to their Power Principle, technology provides a means of inverting that inversion by phenomenalising (Pratt 1998) mathematical ideas into quasi-concrete objects that can be manipulated on-screen as if they were everyday phenomena. In these circumstances, it is possible to use mathematical concepts (or at least on-screen representations of them) without having already struggled with their definitions and computational methods. For example, pedagogic techniques can exploit the graphing capabilities of spreadsheets so that emphasis is removed from computation and low-level skills and increased with respect to high-level skills such as interpretation. A particular case can be found in the use of active graphing, where a pedagogic technique encourages the pupils to construct an analytical utility for graphs, in addition to the conventional presentational utility (Ainley et al. 2001).
Conjecture 4 is fundamentally tied up with the Power Principle. Epistemological analysis of a learning domain can identify likely key mathematical or statistical concepts. Psychopedagogical analysis can identify unconventional representations, which connect with the intuitive strata of existing knowledge amongst the target pupil population and provide them as controls within the virtual domain of abstraction. Through engagement with those controls in a purposeful task, the pupils can begin to construct the control as a representation.
So, in the case of ChanceMaker, the concept of distribution was phenomenalised to become the Workings box, which first acted as a control over the behaviour of the gadget but later also became a representation of proportions of outcomes, in much the same way as experts would think about distribution. Harel & Papert (1991) have shown us how digital technology is especially well-suited for the task of phenomenalising mathematical representations so that they can be used as controls. It is not so clear that this trick can be easily performed in more conventional settings.
In conclusion, we have summarised one instance of how reflection on software design can lead to the identification of design heuristics. In this case, four such heuristics were identified.
The applicability of those heuristics to contexts involving older pupils, statistical or even mathematical learning, and teaching through the use of technology is not yet well understood. However, we have discussed the opportunities and difficulties that might be encountered in extending the heuristics to designers in other operational settings, such as teachers in classrooms.