Continuous Improvement, Probability, and Statistics, 1st Edition


The need for probability literacy has been recognized by educational authorities in many countries by including probability in the curricula at different educational levels and in the education of teachers.

However, including a topic in the curriculum does not automatically assure its correct teaching and learning; the specific characteristics of probability, such as its multifaceted nature or the lack of reversibility of random experiments, are not usually found in other areas and create special challenges for teachers and students. Furthermore, several books and major handbook chapters listed in the Further Readings section suggest the relevance of this field and the need to reformulate a research agenda in this area for the coming years.

Research in probability education has a fairly long history and includes theoretical analyses and empirical research on a variety of topics and from different perspectives, as described in the next sections. As we reviewed the existing literature on probability education, several major themes came to the fore.

These themes have been used to organize a brief review of our current understanding of probability education that informs the discussions for the ICME topic study group. Research in any area of mathematics education should be supported by an epistemological reflection about the objects that are being investigated. This reflection is especially relevant when focusing on probability, where different approaches to the concept that influence both the practice of stochastics and the school curricula are still being debated in the scientific community.

According to Hacking, probability has been conceived from two main, albeit different, perspectives since its emergence. A statistical side of probability is related to the need to find the objective mathematical rules that govern random processes; probability values are assigned through data collected from surveys and experiments.

Complementary to this vision, an epistemic side views probability as a personal degree of belief, which depends on the information available to the person assigning the probability. Currently, the main interpretations are intuitive, classical, frequentist, subjective, logical, propensity, and axiomatic.

Each of these views entails some philosophical issues and is more suited to model particular real-world phenomena or to be taken into account in curricula for specific students. In the next sections we briefly summarise the main features of the aforementioned views of probability, some of which have been introduced in school curricula. The theory of probability is, in essence, a formal encapsulation of intuitive views of chance that leads to the fundamental idea of assigning numbers to uncertain events.

According to David, cubic dice were abundant in primitive cultures. Interestingly, the development of the theory of probability is much more recent, with, according to David, no clear reasons to explain this delay. Intuitive ideas about chance can be used by a teacher to help children develop a more mature understanding and use probability as a tool to compare the likelihood of different events in a world filled with uncertainty. The earliest theoretical progress in probability was linked to games of chance such as throwing dice. It is not surprising that the initial formalization of this concept was based on the assumption that all possible elementary events are equiprobable, since this hypothesis is reasonable in many games of chance.

In the classical definition of probability, given by Abraham de Moivre in his Doctrine of Chances and later refined by Laplace in his Philosophical Essay on Probabilities, probability is simply the fraction of the number of cases favourable to a particular event over the number of all cases possible. This definition has been widely criticised since its publication, because the assumption of equiprobability of outcomes is subjective, and it impedes the application of probability to a broad variety of natural phenomena where this assumption may not be valid.
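As a minimal sketch of the classical (Laplacean) rule described above, assuming a finite set of equiprobable cases, the probability of an event is just the ratio of favourable to possible cases:

```python
from fractions import Fraction

def classical_probability(favourable, possible):
    """Laplace's rule: favourable cases over all possible cases,
    assuming every elementary outcome is equally likely."""
    return Fraction(favourable, possible)

# A fair six-sided die: three of the six faces show an even number.
p_even = classical_probability(3, 6)
print(p_even)  # 1/2
```

Exact rational arithmetic makes the dependence on the equiprobability assumption explicit; if the die were loaded, this rule would no longer apply.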

The convergence of relative frequencies for the same event to a constant value, after a large number of independent identical trials of a random experiment, has been observed by many authors. Since such an empirical tendency is visible in many natural phenomena, this particular definition of probability extended the range of applications enormously. A practical drawback of this frequentist view is that we only obtain an estimation of probability, which varies from one series of repetitions of the experiment (called samples) to another. Moreover, this approach is not appropriate when it is not possible to repeat an experiment under exactly the same conditions (Batanero et al.).
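The practical drawback just mentioned, that frequentist estimates vary from one series of trials to another, can be illustrated with a simple simulation (a sketch: the fair die and sample size are assumptions for illustration):

```python
import random

def estimate_p_six(trials, seed):
    """Estimate P(rolling a six) from one simulated series of die rolls."""
    rng = random.Random(seed)
    return sum(rng.randrange(1, 7) == 6 for _ in range(trials)) / trials

# Repeating the same experiment yields different estimates from one
# series (sample) to another; none equals the theoretical 1/6 exactly.
estimates = [estimate_p_six(1000, seed) for seed in range(5)]
print([round(e, 3) for e in estimates])
```

Each run gives a legitimate estimate near 1/6, but only an estimate; the theoretical value is never observed directly.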

Consequently, it is important to make clear to students the difference between a theoretical probability model and the frequency data from reality used to create the model. Sometimes this difference is not made explicit in the classroom, which may confuse students who need to use abstract knowledge about probability to solve concrete real-life problems.

Popper introduced the idea of propensity as a measure of the tendency of a random system to behave in a certain way, and as a physical disposition to produce an outcome of a certain kind.

In the long run, propensities are tendencies to produce relative frequencies with particular values, but the propensities are not the probability values themselves (Gillies). For example, a cube-shaped die has an extremely strong tendency (i.e., propensity) to produce relative frequencies close to 1/6 for each face in the long run. In single-case theory, by contrast, a propensity is attributed to the individual trial and its generating conditions rather than to a long sequence of repetitions.


Again, this propensity interpretation of probability is controversial. In the long-run interpretation, propensity is not expressed in terms of other empirically verifiable quantities, and we then have no method of empirically finding the value of a propensity. With regard to the single-case interpretation, it is difficult to assign an objective probability to single events (Gillies). It is also unclear whether single-case propensity theories obey the probability calculus.

Researchers such as Keynes and Carnap developed the logical theories of probability, which retain the classical idea that probabilities can be determined a priori by an examination of the space of possibilities; however, the possibilities may be assigned unequal weights. In this view, probability is a degree of implication that measures the support provided by some evidence E to a given hypothesis H. Between certainty (1) and impossibility (0), all other degrees of probability are possible.

This view amplifies deductive logic, since implication and incompatibility can be considered as extreme cases of probability. Carnap constructed a formal language and defined probability as a rational degree of confirmation. The degree of confirmation of a hypothesis H, given some evidence E, is a conditional probability and depends entirely on the logical and semantic properties of H and E and the relations between them. Therefore, probability is only defined for the particular formal language in which these relations are made explicit. Another problem with this approach is that there are many possible confirmation functions, depending on the choice of initial measures and on the language in which the hypothesis is stated.

Following this interpretation, some mathematicians developed the subjective view of probability as a personal degree of belief. However, the status of the prior distribution in this approach was criticised as subjective, even if the impact of the prior diminishes with accumulating objective data, and de Finetti proposed a system of axioms to justify this view. In this subjectivist viewpoint, the repetition of the same situation is no longer necessary to give sense to probability, and for this reason the applications of probability entered new fields such as politics and economics, where it is difficult to assure replications of experiments.


Today the Bayesian approach to inference, which is based on this view of probability, is quickly gaining further traction in numerous fields. Despite the strong philosophical discussion on the foundations, the applications of probability to all sciences and sectors of human activity expanded very quickly. Throughout the 20th century, different mathematicians tried to formalise the mathematical theory of probability.

The set S of all possible outcomes of a random experiment is called the sample space of the experiment. However, the interpretation of what a probability is differs according to the perspective one adheres to; the discussion about the meanings of probability is still very much alive in different approaches to statistics.
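The formalisation referred to above can be made concrete for a finite sample space. The following sketch (the die and its classical probability assignment are illustrative assumptions) checks the basic Kolmogorov-style axioms for events, i.e. subsets of S:

```python
from fractions import Fraction

# Sample space S of a single roll of a fair die, with a probability
# assigned to each elementary outcome (classical assignment here).
S = {1, 2, 3, 4, 5, 6}
p = {outcome: Fraction(1, 6) for outcome in S}

def prob(event):
    """P(A) for an event A, i.e. a subset of the sample space S."""
    return sum(p[outcome] for outcome in event)

# The axioms hold for this assignment:
assert all(p[o] >= 0 for o in S)            # non-negativity
assert prob(S) == 1                         # P(S) = 1
A, B = {1, 2}, {5, 6}                       # disjoint events
assert prob(A | B) == prob(A) + prob(B)     # finite additivity
```

The same axioms constrain any probability assignment, whichever interpretation (classical, frequentist, subjective) motivates the numbers.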

The above debates were, and are, reflected in school curricula, although not all approaches to probability received the same interest. Traditionally, the classical view of probability based on combinatorial calculus dominated the secondary school curriculum in countries such as France (Henry). Since this view relies strongly on combinatorial reasoning, the study of probability beyond very simple problems was difficult for students.

The axiomatic approach was also dominant in the modern mathematics era, because probability was used as a relevant example of the power of set theory. However, in both the classical and axiomatic approaches, the multiple applications of probability to different sciences were hidden from students. Today, with the increasing interest in statistics and developments in technology, the frequentist approach is receiving preferential treatment. An experimental introduction of probability as a limit of relative frequencies is suggested in many curricula and standards documents.

At the primary school level, an intuitive view, where children start from their intuitive ideas related to chance and probability, is also favoured. The axiomatic approach is not used at the school level, as it is too formal and adequate only for those who pursue studies of pure mathematics at the post-secondary level. More details of probability content in the school curricula will be discussed in Sect.

The recent emphasis on the frequentist view and on informal approaches to teaching inference may lead to a temptation to reduce the teaching of probability to the teaching of simulations, with little reflection on probability rules. However, as described by Gal, probability knowledge and reasoning are needed by all citizens in everyday and professional settings and in decision-making situations. Moreover, probability is also essential in the training of scientists and professionals in many fields. Consequently, designing educational programmes that help develop probability knowledge and reasoning for a variety of students requires the description of its different components.

While there is an intense discussion on the nature of statistical thinking and how it differs from statistical reasoning and statistical literacy, a parallel analysis of probabilistic reasoning is still needed. Below we describe some points to advance future research on this topic. Among other components, probabilistic reasoning includes the abilities to:

- analyse conditions of random events and derive appropriate modelling assumptions; and
- construct mathematical models for stochastic situations and explore various scenarios and outcomes from these models.

An important step in any application of probability to real-world phenomena is modelling random situations (Chaput et al.).

Probability models, such as the binomial or normal distribution, supply us with the means to structure reality: they constitute important tools to recognise and to solve problems. Probability-related knowledge relevant to understanding real-life situations includes concepts such as conditional probabilities, proportional reasoning, random variables, and expectation.
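As a minimal sketch of how such a model structures reality (the batch size and defect rate below are hypothetical numbers, not from the text), the binomial distribution links a random variable to its expectation:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p): the probability of exactly k
    successes in n independent trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# A hypothetical model: the number of defective items in a batch of 10,
# each item independently defective with probability 0.1.
pmf = {k: binom_pmf(k, 10, 0.1) for k in range(11)}

# Expectation computed directly from the distribution: sum of k * P(X = k).
expectation = sum(k * prob for k, prob in pmf.items())
print(round(expectation, 2))  # matches the closed form n * p = 1.0
```

Working through the definition and recovering the closed form n * p is one way to connect the random-variable and expectation concepts named above.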


It is also important to be able to critically assess the application of probabilistic models to real phenomena. Since today an increasing number of events are described in terms of risk, the underlying concepts and reasoning have to be learned in school, and the understanding of risk by children should also be investigated (Martignon; Pange and Talbot). Probabilistic reasoning is different from reasoning in classical two-valued logic, where a statement is either true or false.

Probabilistic reasoning follows different rules than classical logic. Furthermore, the field of probability is replete with intuitive challenges and paradoxes, and misconceptions and fallacies are abundant (Borovcnik and Kapadia). These counterintuitive results appear even in elementary probability, whereas in other areas of mathematics counterintuitive results only arise when working with advanced concepts (Batanero; Borovcnik). For example, it is counterintuitive that obtaining a run of four consecutive heads when tossing a fair coin does not affect the probability that the following coin flip will result in heads (i.e., it remains 1/2).
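The coin-run example above can be checked empirically. This sketch simulates a long sequence of fair-coin flips and looks at the flips that immediately follow four consecutive heads (sequence length and seed are arbitrary choices):

```python
import random

rng = random.Random(42)
flips = [rng.random() < 0.5 for _ in range(200000)]  # True = heads

# Among positions immediately following four consecutive heads, count
# how often the next flip is heads; independence keeps this near 0.5.
after_run = [flips[i] for i in range(4, len(flips)) if all(flips[i - 4:i])]
p_heads_after_run = sum(after_run) / len(after_run)
print(round(p_heads_after_run, 3))
```

The estimate hovers near 0.5 despite the preceding streaks, which is exactly the counterintuitive point: the coin has no memory.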

Probability utilises language and terminology that are demanding and not always identical to the notation common in other areas of mathematics. Yet probability provides an important mode of thinking on its own, not just a precursor to inferential statistics. The important contribution of probability to solving real problems justifies its inclusion in the school curriculum. Another component of probabilistic reasoning is distinguishing between causality and conditioning. Although independence is mathematically reduced to the multiplicative rule, a didactical analysis of independence should include discussion of the relationships between stochastic and physical independence, and of psychological issues related to the causal explanations that people often attach to independence (Borovcnik). While dependence in probability characterises a bi-directional relation, the two directions involved in conditional probabilities have completely different connotations from a causal standpoint.

For example, whereas the conditional probability of a positive result on a diagnostic test given that a person carries some virus is causal, the backward direction of conditional probability, from a positive diagnosis to actually having the virus, is merely indicative. Alternatively stated, while the test is positive because of a disease, no disease is caused by a positive test result. In many real-life situations the causal and probabilistic approaches are intermingled.
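The two directions of conditioning in the diagnostic example can be made numeric with Bayes' rule. The prevalence, sensitivity, and false-positive rate below are hypothetical numbers chosen purely for illustration:

```python
# Hypothetical numbers: 1% prevalence, a test with 99% sensitivity
# P(positive | virus) and a 5% false-positive rate.
p_virus = 0.01
p_pos_given_virus = 0.99        # the causal direction
p_pos_given_no_virus = 0.05     # false-positive rate

# Total probability of a positive result.
p_pos = p_pos_given_virus * p_virus + p_pos_given_no_virus * (1 - p_virus)

# Bayes' rule gives the indicative direction: P(virus | positive).
p_virus_given_pos = p_pos_given_virus * p_virus / p_pos
print(round(p_virus_given_pos, 3))  # → 0.167
```

Despite the test's high sensitivity, only about one in six positive results indicates the virus, because the condition is rare; the two conditional probabilities are far from interchangeable.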

Often we observe phenomena whose particular behaviour is due to some causal impact factors plus some random perturbations. The challenge, often attacked with statistical methods, is then to separate the causal from the random influences. A sound grasp of conditional probabilities is needed to understand all these situations, and also serves as a foundation for understanding inferential statistics.

Another key element in probabilistic reasoning is discriminating random from causal variation. Variability is a key feature of any statistical data, and understanding variation is a core element of statistical reasoning (Wild and Pfannkuch). However, whereas variation between different samples from the same population or process is random, systematic differences between populations or processes point to causal sources. Besides, the larger the size of the individual variation, the smaller the amount of variation that can be attributed to systematic causes.

Random and causal sources of variation are complementary to each other, as they are considered in probability models used in statistical decision processes. Consider, for example, the problem of deciding whether expected values of two random variables differ. Several realisations of each of the two single variables will not be identical and most likely the empirical means will not be equal.

Based on a sample of realisations of each random variable, we perform an analysis that leads to the classical two-sample statistical test. Statistical inference based on probabilistic reasoning provides methods and criteria to decide, with a margin of error, when the observed differences are due to random or causal variation. It may be surprising, and from an epistemological point of view is far from obvious, that patterns of variation in careful measurements or in data of many individuals can be described by the same type of mathematics that is used to characterise the results of random experiments.
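The decision process just described can be sketched with a simulation-based alternative to the classical two-sample test, namely a permutation test (the data values below are hypothetical, and the permutation approach is a stand-in for the t-test the text mentions):

```python
import random
import statistics

def perm_test(xs, ys, reps=2000, seed=0):
    """Two-sample permutation test: how often does random relabelling
    of the pooled data produce a mean difference at least as large as
    the observed one? A small proportion suggests a systematic cause
    rather than purely random variation."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(xs) - statistics.mean(ys))
    pooled = list(xs) + list(ys)
    hits = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(xs)])
                   - statistics.mean(pooled[len(xs):]))
        if diff >= observed:
            hits += 1
    return hits / reps

# Two hypothetical samples whose means clearly differ.
a = [5.1, 4.8, 5.3, 5.0, 5.2, 4.9]
b = [6.0, 6.2, 5.9, 6.1, 6.3, 5.8]
print(perm_test(a, b))  # small proportion: unlikely to be random variation
```

A proportion near zero says random relabelling almost never reproduces the observed gap, so the difference is attributed, with a margin of error, to a causal source.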

Indeed, it is here where data and chance, i.e., statistics and probability, meet. However, the above is not obvious for some students, who may reveal a prevailing inclination to attribute even small variation in observed phenomena to deterministic causes. Consider, for example, a student who expects a diet to reduce body weight steadily from one day to the next. A perspective of losing weight as a noisy process may solve the problem for the student: sticking to a particular diet plan may have an influence on body weight over time, described by a deterministic function, which, however, is affected by individual, unforeseen, and unpredictable random influences.

Wild and Pfannkuch state that people have a strong natural tendency to search for specific causes. Konold has accounted for this tendency in his outcome approach. This tendency is particularly visible in secondary school students, whose adherence to a mechanistic-deterministic view of the world is well documented and does not seem to fade with increasing years of schooling (Engel and Sedlmeier). To conclude this section, we remark that probabilistic reasoning is closely related to, and yet different from, statistical reasoning.

Statistics can be portrayed as the science of learning from data (Moore). At first glance it may be surprising that data (from the Latin datum, the given) can be connected with randomness, the unforeseen. The outcome of a random experiment is uncertain. How is it possible to associate measurement readings collected in a concrete physical context with the rather metaphysical concept of randomness, which cannot even be defined in exact mathematical terms?

While probabilistic reasoning aims at structuring our thinking through models, statistical reasoning tries to make sense of observed data by searching for models that may explain the data. Probabilistic reasoning usually starts with models, investigates various scenarios and attempts to predict possible realizations of random variables based on these models.

The initial points of statistical reasoning are data, and suitable models are fitted to these data as a means to gain insight into the data-producing process. The full power of the two modes of reasoning for advancing human knowledge comes to bear only in a synthesis acknowledging that they are two sides of the same coin. The described need to understand random phenomena and to make adequate decisions when confronted with uncertainty has been recognised by many educational authorities.

Consequently, the teaching of probability is included in curricula in many countries during primary or secondary education.


An important area of research in probability education is the analysis of curricular guidelines and curricular materials, such as textbooks. Both topics are now commented on in turn. At the primary level, curricular guidelines expect students to:

- describe events as likely or unlikely and discuss the degree of likelihood using such words as certain, equally likely, and impossible; and
- predict the probability of outcomes of simple experiments and test the predictions.


Students are also expected to understand that the measure of the likelihood of an event can be represented by a number from 0 to 1. These recommendations have been reproduced in other curricular guidelines for primary school. Today, some curricula include probability from the first or second levels of primary education. In the case of Mexico, for example, probability was postponed to the middle school level with the argument that primary school teachers have many difficulties in understanding probability and therefore are not well prepared to teach the topic.

This change does not take into account, however, the relevance of developing probabilistic reasoning in young children, which was emphasised by Fischbein, or the multiple connections between probability and other areas of mathematics, as stated in the Guidelines for Assessment and Instruction in Statistics Education (GAISE) for pre-K levels (Franklin et al.). Probability is a part of mathematics that enriches the subject as a whole by its interactions with other uses of mathematics.

At higher levels, curricular guidelines recommend that students:

- understand and use appropriate terminology to describe complementary and mutually exclusive events;
- use proportionality and a basic understanding of probability to make and test conjectures about the results of experiments and simulations;
- compute probabilities for simple compound events, using such methods as organised lists, tree diagrams, and area models;
- understand the concepts of sample space and probability distribution and construct sample spaces and distributions in simple cases; and
- compute and interpret the expected value of random variables in simple cases.

Students are also expected to understand the concepts of conditional probability and independent events. In Mexico, there are different high school strands; in most of them a compulsory course in probability and statistics is included. In France, the main statistical content in the last year of high school (terminale) is statistical inference.

Research shows the coexistence of different interpretations as well as misconceptions held by students, and suggests the need to reinforce the understanding of randomness in students (Batanero).

**Events and sample space.** Some children concentrate on only a single event, since their thinking is mainly deterministic (Langrall and Mooney). It is then important that children understand the need to take into account all the different possible outcomes of an experiment to compute its probability.

**Combinatorial enumeration and counting.** Combinatorics is used in listing all the events in a sample space or in counting without listing all its elements.

Although in the frequentist approach we do not need combinatorics to estimate the value of probability, combinatorial reasoning is nevertheless needed in other situations, for example, to understand how events in a compound experiment are formed or to understand how different samples of the same size can be selected from a population. Combinatorial reasoning is difficult; however, it is possible to use tools such as tree diagrams to help students reinforce this particular type of reasoning.
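The kind of enumeration a tree diagram supports can be sketched in code: each leaf of the tree for a two-dice compound experiment corresponds to one ordered pair, and counting favourable leaves gives classical probabilities (the two-dice example is an illustrative choice):

```python
from itertools import product
from collections import Counter

# Enumerate the sample space of a compound experiment (two dice), as
# the leaves of a tree diagram: one branch per ordered pair of faces.
sample_space = list(product(range(1, 7), repeat=2))
assert len(sample_space) == 36

# Counting favourable cases gives classical probabilities, e.g. the
# distribution of the sum of the two faces.
sums = Counter(a + b for a, b in sample_space)
print(sums[7], "/", len(sample_space))  # 6 / 36 ways to roll a sum of 7
```

Systematic enumeration replaces error-prone informal listing, which is precisely what tree diagrams are meant to scaffold.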

**Independence and conditional probability.** The notion of independence is important for understanding simulations and the empirical estimation of probability via frequencies, since repeating experiments requires independence of trials. Computing probabilities in compound experiments requires one to analyse whether the experiments are dependent or not. Finally, the idea of conditional probability is needed to understand many concepts in probability and statistics, such as confidence intervals or hypothesis tests.

**Probability distribution and expectation.**

Although there is abundant research related to distribution, most of it concentrates on data distributions or sampling distributions. Another type of distribution is linked to the random variable, a powerful idea in probability, as is the associated idea of expectation. Some probability distribution models in wide use are the binomial, uniform, and normal distributions.

**Convergence and laws of large numbers.** The progressive stabilisation of the relative frequency of a given outcome in a large number of trials has been observed for centuries; Bernoulli proved the first version of the law of large numbers, which justified the frequentist definition of probability.
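This stabilisation can be sketched with a simulated fair coin (the checkpoints and seed are arbitrary choices for illustration):

```python
import random

def running_frequencies(checkpoints, seed=11):
    """Track the relative frequency of heads for a simulated fair coin.
    Individual flips stay unpredictable, but the relative frequency
    stabilises near 0.5 as the number of trials grows."""
    rng = random.Random(seed)
    heads = 0
    results = {}
    targets = set(checkpoints)
    for n in range(1, max(checkpoints) + 1):
        heads += rng.random() < 0.5
        if n in targets:
            results[n] = heads / n
    return results

for n, freq in sorted(running_frequencies([100, 10000, 100000]).items()):
    print(n, round(freq, 3))
```

Early frequencies wander noticeably; only the long run exhibits the regularity that the law of large numbers guarantees.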

Today the frequentist approach, where probability is an estimate of the relative frequency of a result in a long series of trials, is promoted in teaching. It is important that students understand that each outcome is unpredictable and that regularity is only achieved in the long run. At the same time, older students should be able to discriminate between a frequency estimate (a value that varies) and probability (which is always a theoretical value) (Chaput et al.).

**Sampling and sampling distribution.**

Given that we are rarely able to study complete populations, our knowledge of a population is based on samples. Students are required to understand the ideas of sample representativeness and sampling variability. The sampling distribution describes the variation of a summary measure (e.g., the sample mean) across repeated samples. Instead of working with the exact sampling distribution, teaching today often relies on empirical sampling distributions generated through simulation. This is a suitable teaching strategy, but teachers should be conscious that, as with any estimate, the empirical sampling distribution only approximates the theoretical sampling distribution.
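An empirical sampling distribution of the mean can be built as follows (population parameters, sample size, and number of resamples are all hypothetical choices for illustration):

```python
import random
import statistics

rng = random.Random(7)
# A hypothetical population of 10,000 measurements.
population = [rng.gauss(50, 10) for _ in range(10000)]

# Empirical sampling distribution of the mean: draw many samples of
# size n and record each sample mean.
n = 25
sample_means = [statistics.mean(rng.sample(population, n)) for _ in range(2000)]

# The spread of the sample means approximates the theoretical standard
# error sigma / sqrt(n); being an estimate, it varies from run to run.
print(round(statistics.stdev(sample_means), 2),
      round(statistics.stdev(population) / n ** 0.5, 2))
```

Comparing the two printed values makes the teaching point explicit: the simulated spread is close to, but not identical with, the theoretical standard error.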

**Modelling and simulation.** Today we witness increasing recommendations to approach the teaching of probability from the point of view of modelling (Chaput et al.). Simulation allows the exploration of probability concepts and properties, and is used in informal approaches to inference.

Simulation acts as an intermediary step between reality and the mathematical model. When teaching probability it is important to take into account the informal ideas that children and adolescents hold about chance and probability before instruction. These ideas are described in the breadth and depth of research investigating probabilistic intuitions, informal notions of probability, and resulting learning difficulties.
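The role of simulation as an intermediary between a real situation and its mathematical model can be sketched with the classic birthday problem (a standard illustration, not an example from the text): the simulation imitates the situation directly, while the model computes the probability exactly.

```python
import random
from math import prod

def exact_shared_birthday(k):
    """Model: P(at least two of k people share a birthday), assuming
    365 equally likely birthdays and independence."""
    return 1 - prod((365 - i) / 365 for i in range(k))

def simulated_shared_birthday(k, reps=20000, seed=3):
    """Simulation of the same situation: draw k birthdays at random
    and check for a repeat, over many repetitions."""
    rng = random.Random(seed)
    hits = sum(
        len({rng.randrange(365) for _ in range(k)}) < k for _ in range(reps)
    )
    return hits / reps

print(round(exact_shared_birthday(23), 3))      # about 0.507
print(round(simulated_shared_birthday(23), 3))  # close to the model value
```

Agreement between the two values builds confidence in the model; discrepancies would prompt a revision of the modelling assumptions, which is exactly the cycle the modelling approach aims to teach.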

We now revisit the essentials associated with probabilistic intuition and the difficulties associated with learning probability. Initial research in probability cognition was undertaken by Piaget and Inhelder and by psychologists with varying theoretical orientations (Jones and Thornton). Alternatively stated, research investigating intuition and learning difficulties was central at the beginning of research in probabilistic thinking and would continue into the next historical phase of research.

The work of Fischbein continued the work of Piaget and Inhelder. As mentioned, other investigations involving intuition were occurring in the field of psychology during this period, using different terminology. This research revealed numerous heuristics and biases, and the research program played a key role in shaping many other fields of research (see, for example, behavioural economics). In the field of mathematics education, the research of Shaughnessy brought forth not only the theoretical ideas of Tversky and Kahneman but also, in essence, research on probabilistic intuitions and learning difficulties.

Although not explicitly framed as intuitions and difficulties, work in this general area was conducted by a number of different individuals. As the Post-Piagetian Period came to a close, the field of mathematics education began to see an increasing volume of research on intuitions and learning difficulties. Moving from one period to the next, research into probabilistic intuitions and learning difficulties would come into its own during what Jones called Phase Three: Contemporary Research.

During this new phase there was, arguably, a major shift towards investigating curriculum and instruction, and the leadership in investigating probabilistic intuitions and learning difficulties was carried on by a particular group of researchers. Worthy of note, mathematics education researchers in this phase, as was the case with Konold and Falk in the previous phase, began to develop their own theories, frameworks, and models associated with responses to a variety of probabilistic tasks.

These theories, frameworks, and models were developed in research investigating a variety of topics in probability, including difficulties associated with randomness, among others. Worthy of note, the term misconception, which acted as the de facto terminology for a number of years, has more recently evolved into preconceptions and other variants, which are perhaps better aligned with other theories in the field of mathematics education.

In line with the above, research developing theories, models, and frameworks associated with intuition and learning difficulties continued into the next phase of research, which Chernoff and Sriraman have prematurely called the Assimilation Period. Gone are the early days when researchers were attempting to replicate research found in different fields, such as psychology.

With that said, researchers are attempting to import theories, models, and frameworks from other fields; however, researchers in mathematics education are forging their own interpretations of results stemming from the intuitive nature of, and the difficulties associated with, probabilistic thinking and the teaching and learning of probability. Theories, models, and frameworks such as inadvertent metonymy (Abrahamson), sample space partitions (Chernoff), and others demonstrate that research into intuitions and difficulties continues in the field of mathematics education.

This does not mean, however, that the field does not continue to look to other domains of research to help better inform mathematics education. For example, recent investigations embracing research from other fields have opened the door to alternative views of heuristics, intuitions, and learning difficulties, such as the work by Gigerenzer and the Adaptive Behavior and Cognition (ABC) Group at the Max Planck Institute for Human Development in Berlin. Based on these developments, the field of mathematics education is starting to develop research that builds upon and questions certain aspects of probabilistic intuitions and learning difficulties.

For example, Chernoff has pursued this direction in a recent string of studies. In considering how students reason about probability, advances in technology and other educational resources have allowed for another important area of research, as described in the next section. Many educational resources have been used to support probability education. Some of the most common resources include physical devices, such as dice, coins, spinners, marbles in a bag, and the Galton board, that help create game-like scenarios involving chance (Nilsson). These devices are often used to support a classical approach to probability: computing the probability of an event a priori by examining the object and making assumptions about symmetry that often lead to equiprobable outcomes for a single trial.

When used together, such devices also give rise to compound experiments with larger sample spaces to enumerate.

Organizational tools such as two-by-two tables and tree diagrams are also used to assist in enumerating sample spaces (Nunes et al.). Since physical devices can also be acted upon, curriculum resources and teachers have increased the use of experiments with these devices to generate chance events and record their frequencies. These frequencies and relative frequencies are used as estimates of probability in the frequentist perspective, and are then often compared to the a priori computed probability based on examination of the object. Of course, if the modelling meaning of probability were stressed in the curriculum, it is debatable whether there would be much advantage in maintaining the current emphasis on coins, spinners, dice, and balls drawn from a bag.

Perhaps, in days gone by when children played board games, there was some natural relevance in such contexts but, now that games take place in real time on screens, probability has much more relevance as a tool for modelling computer-based action and for simulating real-world events and phenomena. One way to help students use probability to model real-world phenomena is to make the need for a model explicit when using technology.

Sampling, storing, organising, and analysing the data generated from a probabilistic model are facilitated tremendously by technology. These recommendations have been used by many researchers and have recently been made explicit as recommendations for teachers by Lee and Lee, and for researchers by Pratt and Ainley and Pratt et al.

Bill Hooper is an independent consultant specializing in data-based productivity and quality improvement for small and mid-sized companies.

Bill has implemented many designed experiments for industrial and service-based clients over the past 20 years, but is likely best known for teaching a series of innovative courses on data, statistics, and Design of Experiments throughout the United States, Canada, Africa, and the Middle East. Bill is also a trained close-up magician and a performing juggler, best known for creating, with his son Todd Hooper, the workshop "Juggling for Creativity and Teamwork".

The use of juggling as a training method for continuous improvement comes from that workshop and from teaching hundreds of children and adults to juggle. Bill previously volunteered for the Chicago-area non-profit Open Heart Magic, an organization that specializes in using close-up magic to accelerate the healing process for hospitalized pediatric patients.