Random experiment. Random Events

§1. What does probability theory study and when did it arise? The concept of a random experiment. The space of elementary outcomes: types and examples. Elements of combinatorics. The concept of an event.

Historical background:

Historically, probability theory arose at the end of the 17th century as a theory of gambling (roulette, dice, cards, etc.). The beginning of its development is associated with the names of Pascal, Bernoulli, de Moivre and Laplace, and later (early 19th century) Gauss and Poisson.

The first studies on probability theory in Russia date back to the mid-19th century and are associated with such outstanding mathematicians as N.I. Lobachevsky, M.V. Ostrogradsky and V.Ya. Bunyakovsky (one of the first to publish a textbook with applications to insurance and demography).

The further development of probability theory (from the late 19th century to the 1920s) is mainly associated with the names of the Russian scientists Chebyshev, Lyapunov and Markov. Since the 1930s this branch of mathematics has flourished, finding applications in various fields of science and technology. In this period the Russian scientists Bernstein, Khinchin and Kolmogorov made significant contributions to the development of probability theory. It was Kolmogorov who, at the age of 30, in 1933 proposed the axiomatic construction of probability theory, establishing its connection with other branches of mathematics (set theory, measure theory, functional analysis).

Probability theory is a branch of mathematics that studies mathematical models of random experiments, i.e. experiments whose outcomes cannot be determined unambiguously by the conditions of the experiment. It is assumed that the experiment itself can be repeated (at least in principle) any number of times under an unchanged set of conditions, and that its results are statistically stable.

The concept of a random experiment

Examples of random experiments:

1. Toss a coin once.

2. Toss a die once.

3. Randomly select a ball from an urn.

4. Measure the operating life of a light bulb.

5. Count the number of calls arriving at a telephone exchange per unit of time.

An experiment is random if it is impossible to predict the outcome not only of the first trial but also of any subsequent one. For example, suppose a chemical reaction with an unknown outcome is carried out. If it is performed once and a certain result is obtained, then in further trials under the same conditions the randomness disappears.

One can give any number of examples of this kind. What do experiments with random outcomes have in common? It turns out that, although the result of each individual trial cannot be predicted, a regularity has long been observed in practice: over a large number of trials, the observed frequency of each random event stabilizes, i.e. it differs less and less from a certain number called the probability of the event.

The observed frequency ν(A) of an event A is the ratio of the number N(A) of occurrences of A to the total number N of trials:

ν(A) = N(A) / N.

For example, when tossing a fair coin, the fraction N(H)/N → 1/2 as N → ∞ (N(H) is the number of heads, N the total number of tosses).

This property of frequency stability makes it possible, without being able to predict the outcome of a single experiment, to predict accurately the properties of mass phenomena associated with the experiment in question. For this reason the methods of probability theory have penetrated all spheres of modern life, not only the natural sciences and economics but also the humanities, such as history and linguistics. The statistical definition of probability is based on this approach:

ν(A) → P(A) as N → ∞

(the observed frequency of an event tends to its probability as the number of trials increases).
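This stabilization is easy to observe numerically. A minimal simulation sketch (using Python's standard `random` module; the seed and trial counts are illustrative):

```python
import random

def observed_frequency(n_trials, seed=1):
    """Toss a simulated fair coin n_trials times; return the observed frequency of heads."""
    rng = random.Random(seed)
    heads = sum(rng.randint(0, 1) for _ in range(n_trials))
    return heads / n_trials

# The frequency nu(A) = N(A)/N drifts toward the probability 1/2 as N grows.
for n in (10, 100, 10_000):
    print(n, observed_frequency(n))
```

For small N the frequency may deviate noticeably from 1/2; for large N it settles close to it, exactly as the statistical definition describes.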

Definition 1.1: An elementary outcome (or elementary event) is any simplest (i.e. indivisible within the given experiment) outcome of the experiment. The set of all elementary outcomes is called the space of elementary outcomes and is denoted Ω.

An example of constructing a space of elementary outcomes:

Consider the following random experiment: a die is thrown once and the number of points on the top face is observed. Its space of elementary outcomes is

Ω = {1, 2, 3, 4, 5, 6}.

It contains all possible outcomes; the occurrence of any one of them excludes the occurrence of the others, and every outcome is indivisible.

Spaces of elementary outcomes (types, with examples of each type):

Discrete spaces are spaces in which the individual outcomes can be distinguished. In a discrete finite space their number can be stated exactly.

Examples of discrete spaces of elementary outcomes

Experiment: a single coin toss.

Ω = {H, T}, where H denotes heads (the coat of arms) and T tails.

The outcome of the coin landing on its edge could also be included in the space of elementary outcomes, but we exclude it from the model as extremely unlikely (every model is only an approximation).

If the coin is fair, i.e. has uniform density and an unshifted center of gravity, then the outcomes H and T have equal chances of occurring. If the coin's center of gravity is shifted, the outcomes have different chances of occurring.

Remark: if a problem says nothing about the coin, it is assumed to be fair.

Experiment: a single toss of two coins.

Note: if the coins are identical, the outcomes HT and TH are visually indistinguishable; one coin can be marked with paint to make them distinguishable.

The model can be built in different ways:

Either we distinguish the outcomes HT and TH and obtain 4 outcomes:

Ω = {HH, HT, TH, TT}, where H denotes heads and T tails.

In this case, if both coins are fair, all outcomes have an equal chance of occurring.

Or we do not distinguish HT and TH and obtain 3 outcomes:

Ω = {HH, HT, TT}, where HT now means "one head and one tail".

In this case, if both coins are fair, the outcome HT has a greater chance of occurring than HH or TT, because it is realized in two ways: heads on the first coin and tails on the second, and vice versa.

Experiment: a random selection of 5 people from a group of 20 students to travel to a conference. The result of the experiment is a specific five. Only the composition of the five matters, i.e. it does not matter who was chosen first, who second, etc. Here

|Ω| = C(20, 5) = 20! / (5! · 15!) = 15504

(the number of "fives" of different composition that can be formed from 20 people; n! denotes the factorial).

The answer to this question is given by the science of combinatorics.

All 15504 outcomes have an equal chance of occurring, because the choice is random.

Experiment: a random selection of 5 people from a group of 20 students to receive bonuses of different amounts. The result of the experiment is a specific ordered five. Now not only the composition matters but also the order of selection, because the size of the bonus depends on the position in which a person is chosen. Here

|Ω| = A(20, 5) = 20 · 19 · 18 · 17 · 16 = 1860480

(the number of different ordered "fives" that can be formed from 20 people).

The answer to this question is again given by the science of combinatorics.

All 1860480 outcomes have an equal chance of occurring, because the choice is random.

Clearly there are more ordered "fives" than unordered ones, because the same composition admits several orderings: each composition of 5 people yields 5! = 120 different orderings.
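Both counts can be checked directly with Python's `math.comb` and `math.perm`:

```python
import math

unordered = math.comb(20, 5)   # composition only: C(20,5)
ordered = math.perm(20, 5)     # composition and order: A(20,5) = 20*19*18*17*16
print(unordered)               # 15504
print(ordered)                 # 1860480
# Each composition of 5 people admits 5! = 120 orderings:
print(ordered == unordered * math.factorial(5))  # True
```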

ELEMENTS OF COMBINATORICS

Generalized multiplication rule:

Let m independent actions be performed in succession; the first action can be performed n₁ ways, the second n₂ ways, and so on, the m-th action n_m ways. Then the entire sequence of actions can be carried out in

n₁ · n₂ · … · n_m ways.

Permutations.

A permutation of n elements is any ordered arrangement of these n elements.

P_n = n! is the number of permutations of n elements.

Explanation: the first element can be chosen in n ways, the second in n − 1 ways, and so on; the last element can be chosen in only one way. The counts are multiplied according to the generalized multiplication rule.

Arrangements.

An arrangement of n elements taken m at a time is any ordered set of m elements chosen from a population of n elements (m ≤ n).

A_n^m = n · (n − 1) · … · (n − m + 1) = n! / (n − m)! is the number of arrangements of n elements taken m at a time (the number of such ordered choices).

Explanation: the first element can be chosen in n ways, the second in n − 1 ways, and so on down to n − m + 1 ways for the m-th; the counts are multiplied according to the generalized multiplication rule.

Combinations.

A combination of n elements taken m at a time is any unordered set of m elements chosen from a population of n elements.

Combinations and arrangements are related as follows:

A_n^m = C_n^m · m!

(each composition of m elements yields m! ordered sets). Thus

C_n^m = A_n^m / m! = n! / (m! (n − m)!)

is the number of combinations of n elements taken m at a time (the number of such unordered choices).
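All three counts follow from the generalized multiplication rule; a first-principles sketch (function names are mine, chosen for illustration):

```python
import math

def permutations(n):
    """P_n = n!: choose the 1st element n ways, the 2nd n-1 ways, ..."""
    result = 1
    for ways in range(n, 0, -1):
        result *= ways
    return result

def arrangements(n, m):
    """A_n^m = n*(n-1)*...*(n-m+1): ordered choice of m out of n."""
    result = 1
    for ways in range(n, n - m, -1):
        result *= ways
    return result

def combinations(n, m):
    """C_n^m = A_n^m / m!: each m-element composition yields m! orderings."""
    return arrangements(n, m) // permutations(m)

print(permutations(5), arrangements(20, 5), combinations(20, 5))
```

The results agree with the student-selection examples above and with the standard library's `math.comb`.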

Example of a continuous space of elementary outcomes

Experiment: two people agree to meet at a certain place between 12 and 13 o'clock, and each of them may arrive at any random moment within this hour. We record the moments of their arrival. Each pair of arrival times is a point of a square with side 60 (since there are 60 minutes in an hour):

Ω = {(x, y): 0 ≤ x ≤ 60, 0 ≤ y ≤ 60}

(the first person arrives at 12 hours x minutes, the second at 12 hours y minutes). The points of the square cannot be counted and enumerated: the square is a continuum, and therefore in this experiment the space of elementary outcomes is continuous.
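The text stops at constructing Ω. As an illustrative continuation (my own assumption, not part of the original), suppose the two people meet if they arrive within 10 minutes of each other. Geometrically, P = 1 − (50/60)² = 11/36 ≈ 0.306, the fraction of the square's area where |x − y| ≤ 10. A Monte Carlo check:

```python
import random

def meeting_probability(n_trials=200_000, wait=10, seed=1):
    """Estimate P(|x - y| <= wait) for x, y uniform on [0, 60]."""
    rng = random.Random(seed)
    hits = sum(abs(rng.uniform(0, 60) - rng.uniform(0, 60)) <= wait
               for _ in range(n_trials))
    return hits / n_trials

print(meeting_probability())   # close to 11/36 ≈ 0.3056
```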

Events and operations on them:

Definition 1.2

Any set of elementary outcomes is called an event. Events are denoted by capital Latin letters A, B, C or by letters with indices A₁, A₂, A₃, etc.

The following terminology is often used: one says that event A has occurred if the experiment resulted in one of the elementary outcomes belonging to A.

Examples of events

Let's return to the experiment of tossing a die. Consider the following events:

A = {an even number of points is rolled}

B = {an odd number of points is rolled}

C = {a number of points divisible by 3 is rolled}

Then, in the notation introduced earlier,

A = {2, 4, 6},  B = {1, 3, 5},  C = {3, 6}.

Definition 1.3

An event consisting of all elementary outcomes, i.e. an event that necessarily occurs in the given experiment, is called certain. It is denoted Ω, the same as the space of elementary outcomes.

Example of a certain event: when a die is thrown, at most 6 points appear; or, when a die is thrown, at least one point appears.

Definition 1.4

An event that contains no elementary outcomes, i.e. an event that never occurs in the given experiment, is called impossible. It is denoted by the symbol ∅.

Example of an impossible event: when two dice are thrown, the total number of points rolled is 20.

Operations on events:

The union (sum) of events A and B, denoted A ∪ B, is the event that occurs if and only if at least one of the events A or B occurs (hence the phrase "at least one of the events A or B occurred").

The intersection (product) of events A and B, denoted A ∩ B or AB, is the event that occurs if and only if both A and B occur.

Definition 1.5: Events A and B are called incompatible if their intersection is an impossible event, i.e. AB = ∅.

An example of a task on operations on events:

Three shots are fired at a target. Consider the events

Aᵢ = {a hit with the i-th shot}, i = 1, 2, 3.

Using set-theoretic operations, express the following events in terms of the events Aᵢ (the overbar denotes the opposite event):

A = {three hits} = A₁A₂A₃

B = {three misses} = Ā₁Ā₂Ā₃

C = {at least one hit} = A₁ ∪ A₂ ∪ A₃

D = {at least one miss} = Ā₁ ∪ Ā₂ ∪ Ā₃

E = {at least two hits} = A₁A₂Ā₃ + A₁Ā₂A₃ + Ā₁A₂A₃ + A₁A₂A₃

F = {at most one hit} = Ā₁Ā₂Ā₃ + A₁Ā₂Ā₃ + Ā₁A₂Ā₃ + Ā₁Ā₂A₃

G = {the target is hit no earlier than the third shot} = Ā₁Ā₂A₃

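Treating events as sets of elementary outcomes makes such identities mechanically checkable. A sketch (an outcome is a triple of 0/1 flags, 1 meaning the i-th shot hit; G is read as "the first two shots miss and the third hits"):

```python
from itertools import product

omega = set(product((0, 1), repeat=3))        # all 8 outcomes of three shots
A_i = [{w for w in omega if w[i] == 1} for i in range(3)]  # hit on shot i+1

A = A_i[0] & A_i[1] & A_i[2]                  # three hits
B = omega - (A_i[0] | A_i[1] | A_i[2])        # three misses
C = A_i[0] | A_i[1] | A_i[2]                  # at least one hit
D = omega - A                                 # at least one miss
E = {w for w in omega if sum(w) >= 2}         # at least two hits
F = {w for w in omega if sum(w) <= 1}         # at most one hit
G = (omega - A_i[0]) & (omega - A_i[1]) & A_i[2]  # first two miss, third hits

print(sorted(A), sorted(B))
print(C == omega - B, D == omega - A)         # C is opposite to B, D to A
```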
Idea: problems of the following type come next: the probabilities of the events Aᵢ are given, and knowing them one must find the probabilities of the events A, B, C, D, E, F, G.

§2. THE CONCEPT OF PROBABILITY

To quantitatively compare the chances of events occurring, the concept of probability is introduced.

Definition 2.1: Suppose every event A is assigned a number P(A). The numerical function P is called a probability, or probability measure, if it satisfies the following axioms:

1. Axiom of non-negativity: P(A) ≥ 0 for every event A.

2. Axiom of normalization: P(Ω) = 1.

3. Axiom of (countable) addition: if the events A₁, A₂, … are pairwise incompatible, then P(A₁ ∪ A₂ ∪ …) = P(A₁) + P(A₂) + …

    Basic concepts (part 1) of the course Probability Theory

    1. Random experiment model.

    2. Events (random events) and their properties.

    3. Probability and its properties.

    4. Conditional probability.

    5. Independence of events.

    6. Total probability formula.

    7. Bayes' formula.

Random experiment model, probability space.
A random experiment has the property of statistical stability: trials can potentially be carried out an unlimited number of times under identical conditions, and each trial records an elementary outcome that is unpredictable in advance.

The model of such an experiment is a coordinated triple of objects (Ω, A, P):

Ω = {ω} is the space of elementary outcomes, the set of all possible elementary outcomes of the experiment. Different elementary outcomes do not intersect: they cannot occur simultaneously in one trial.

A = {A, B, …} is the event class, the complete collection of events of interest.
Each event is a certain subset of the possible elementary outcomes of the experiment.

P is the probability measure on the events of the experiment.
For every event A its probability P(A) is defined, calculated by a single rule.


Event properties:
We say that an event A occurred in the experiment if the experiment led to an elementary outcome belonging to A.

Completeness of the event class A means:

a) together with every event A we also consider its complement Ā, the event consisting of all possible elementary outcomes of the experiment not included in A;

b) together with any two events A and B we also consider their union A ∪ B and their intersection A ∩ B.

Consequences:

Ω and ∅ belong to the event class; Ω is called the certain event and ∅ the impossible event.

If A ∩ B = ∅, then the events A and B are called incompatible.


Properties of probabilities: P(∅) = 0; P(Ω) = 1; 0 ≤ P(A) ≤ 1 for every event A; and P(A ∪ B) = P(A) + P(B) for incompatible A and B.


    Methods for specifying a probability measure.

• Classical probability. If
a) the number of elements of Ω is finite, |Ω| = n;

b) all elementary outcomes are events (elementary events), {ω} ∈ A;

c) the probabilities of all elementary events are equal (uniform probability measure), P(ω) = 1/n;

then the probability of any event A is defined as the ratio of the number of elementary outcomes in A, |A|, to the number of elementary outcomes in Ω: P(A) = |A| / |Ω|.
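A sketch of the classical scheme for a single die throw (the uniform measure on Ω = {1, …, 6}):

```python
from fractions import Fraction

omega = set(range(1, 7))                      # space of elementary outcomes

def classical_probability(event):
    """P(A) = |A| / |Omega| under the uniform probability measure."""
    return Fraction(len(event & omega), len(omega))

A = {2, 4, 6}          # an even number of points
C = {3, 6}             # a number of points divisible by 3
print(classical_probability(A))   # 1/2
print(classical_probability(C))   # 1/3
```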


• Geometric probability. If a finite non-negative measure s(·) is given on the space of elementary outcomes Ω, then the probability of any event A is defined as the ratio of the measure of A, s(A), to the measure of Ω, s(Ω): P(A) = s(A) / s(Ω).

• Distribution density. If
a) the space of elementary outcomes is the number axis (Ω = ℝ) or part of it;

b) a non-negative function p(ω) ≥ 0 is given such that the area s(V_Ω) of the figure bounded by the graph of p(ω) and the number axis Ω equals 1, s(V_Ω) = 1;

then the function p(ω) is called the distribution density, and the probability of any event A ⊆ Ω is given by the area s(V_A) of the figure bounded by the graph of p(ω) over the part A of the number axis: P(A) = s(V_A).


Conditional probability.
The probability of the event A given that the event B has occurred (P(B) > 0) is the number P(A ∩ B) / P(B); it is denoted P_B(A) or P(A|B), that is:
P_B(A) = P(A|B) = P(A ∩ B) / P(B). Note that 0 ≤ P_B(A) ≤ 1, because A ∩ B ⊆ B and P(B) > 0.

Independence of events.
Events A and B are independent if P(A ∩ B) = P(A) · P(B).

Three events are jointly independent if:
a) every two of them are independent, and
b) the intersection of every two of them is independent of the third event.

The concept of joint independence extends to a larger number of events in a similar way.


Complete group of events.
If events H₁, H₂, …, H_k, … are such that their union H₁ ∪ H₂ ∪ … ∪ H_k ∪ … = Ω and they are pairwise incompatible (do not intersect), H_i ∩ H_j = ∅ for i ≠ j, then these events form a complete group of events.

Total probability formula.
If events H₁, H₂, …, H_k, … form a complete group of events, then the total probability formula holds:

P(A) = Σᵢ P(Hᵢ) · P(A|Hᵢ).

The probability of an event can thus be calculated as a weighted sum of the conditional probabilities of this event given the events of the complete group, where the probabilities of the corresponding events of the complete group serve as the weights.


Bayes formula.
If events H₁, H₂, …, H_k, … form a complete group of events, then Bayes' formula holds for recalculating the probabilities of the events of the complete group from the result of a trial in which the event A occurred:

P_A(H_k) = P(A ∩ H_k) / P(A) = P(H_k) · P(A|H_k) / Σᵢ [P(Hᵢ) · P(A|Hᵢ)].
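A worked numeric sketch of both formulas (the prior and conditional probabilities below are my own illustrative numbers, not from the source):

```python
# Complete group H1, H2, H3 with prior probabilities, and P(A | Hi).
priors = [0.5, 0.3, 0.2]          # P(H1), P(H2), P(H3); they sum to 1
cond = [0.1, 0.4, 0.5]            # P(A | Hi)

# Total probability formula: P(A) = sum_i P(Hi) * P(A | Hi)
p_a = sum(p * c for p, c in zip(priors, cond))

# Bayes formula: P(Hk | A) = P(Hk) * P(A | Hk) / P(A)
posteriors = [p * c / p_a for p, c in zip(priors, cond)]

print(round(p_a, 3))                        # 0.27
print([round(x, 3) for x in posteriors])    # they sum to 1
```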


Typical models of a random experiment.
B(p). Bernoulli model with parameter p; a Bernoulli trial with parameter p, 0 ≤ p ≤ 1.
An experiment with two alternative outcome events, U (success) and N (failure).
P(U) = p, P(N) = q = 1 − p.

U(2). The simplest urn model.

Drawing a ball from an urn with two balls. The model is equivalent to the Bernoulli model B(½).

U(n). Classical urn model.

Drawing a ball from an urn with n numbered balls. An elementary outcome (elementary event) is the number of the drawn ball. This is classical probability with a uniform distribution over the elementary events.

U(n; m). Urn model.
Drawing a ball from an urn with m white and (n − m) black balls.
The model is equivalent to the Bernoulli model B(m/n).


Sequences of random experiments.
B(n; p). Binomial model: n successive independent Bernoulli trials with parameter p.

U(n × n). Sequential drawing, with replacement, of two balls from an urn with n balls.

U(2 × 2). Sequential drawing, with replacement, of two balls from an urn with two balls. The model is equivalent to the binomial model B(2; ½).

U(n × (n − 1)). Sequential drawing, without replacement, of two balls from an urn with n balls.
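In the binomial model B(n; p), the probability of exactly k successes is C(n, k) p^k (1 − p)^(n − k), a standard result consistent with these models. A sketch using the urn equivalence U(n; m) ≡ B(m/n), with illustrative numbers (3 white balls out of 10, 5 draws with replacement):

```python
from math import comb

def binomial_pmf(n, k, p):
    """P(exactly k successes in n independent Bernoulli trials with parameter p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Drawing a white ball from an urn with 3 white of 10 balls is a success, p = 0.3.
p = 3 / 10
dist = [binomial_pmf(5, k, p) for k in range(6)]
print([round(x, 4) for x in dist])
print(round(sum(dist), 10))        # the probabilities sum to 1
```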

  • CHAPTER 1 PROBABILITY THEORY

    Probability experiment. Subject and tasks of probability theory.

    The results of any experiment depend to one degree or another on the set of conditions S under which the experiment is carried out. These conditions either objectively exist or are created artificially (i.e., an experiment is planned).

    According to the degree of dependence of the results of an experiment on the conditions under which it was carried out, all experiments can be divided into two classes: deterministic and probabilistic.

o Deterministic experiments are experiments whose results can be predicted in advance, on the basis of natural-science laws, from a given set of conditions S.

An example of a deterministic experiment is the determination of the acceleration acquired by a body of mass m under the action of a force F: the desired value is uniquely determined by the set of experimental conditions (the mass m of the body and the force F).

    Deterministic are, for example, all processes based on the use of the laws of classical mechanics, according to which the movement of a body is uniquely determined by given initial conditions and forces acting on the body.

o Probabilistic (stochastic, random) experiments are experiments that can be repeated an arbitrary number of times under the same stable conditions but whose outcome, unlike that of a deterministic experiment, is ambiguous and random. That is, it is impossible to predict the result of a probabilistic experiment in advance from the set of conditions S. However, if a probabilistic experiment is repeated many times under the same conditions, the totality of its outcomes obeys certain regularities. Probability theory studies these regularities (more precisely, their mathematical models). Here are some examples of probabilistic experiments, which from now on we shall simply call experiments.

    Example 1

Let the experiment consist of tossing a symmetric coin once. The experiment can end in one of two mutually exclusive outcomes: heads (the coat of arms) or tails (the lattice). If the initial translational and rotational velocities and the initial position of the coin at the moment of the toss were known exactly, the result could be predicted by the laws of classical mechanics, i.e. the experiment would be deterministic. However, the initial data of the experiment cannot be fixed and change constantly; therefore the result of the experiment is said to be ambiguous, random. Nevertheless, if the same symmetric coin is tossed repeatedly, keeping the conditions of the experiment as stable as possible, the totality of its outcomes obeys a definite regularity: the relative frequency m₁/n of heads stabilizes (n is the number of tosses, m₁ the number of heads, m₂ the number of tails).

    Example 2

    Let's assume that we are filling out a sports lotto card. Before the winning draw, it is impossible to predict how many numbers will be guessed correctly. However, the experience of conducting sports lotto draws suggests that the average percentage of players who guessed m (1≤m≤6) numbers fluctuates around a certain constant value. These “patterns” (the average percentage of correctly guessing a given number of numbers) are used to calculate winning funds.

    Probabilistic experiments have the following common features: unpredictability of the result; the presence of certain quantitative patterns when they are repeated many times under the same conditions; many possible outcomes.

o The subject of probability theory is the quantitative and qualitative analysis of mathematical models of probabilistic experiments, together with the statistical processing of experimental data.

    o Probability theory- the science that deals with the analysis of mathematical models for decision making under conditions of uncertainty.

    Events and operations on them.

    Relative frequencies and their properties

    The primary concept of probability theory, not defined through other concepts, is the space of elementary outcomes Ω. Usually, the only possible indecomposable results of an experiment are taken as the space of elementary outcomes.

    Example

1. Suppose that a symmetric coin is tossed. Then Ω = {H, T} (heads and tails).

2. A die is thrown: Ω = {1, 2, 3, 4, 5, 6}.

3. Two coins are tossed: Ω = {HH, HT, TH, TT}.

4. Two dice are thrown: Ω = {(i, j): i, j = 1, …, 6}; the number of elementary outcomes is 36.

5. A point ω is thrown at random onto the number axis: Ω = ℝ.

6. Two points are thrown onto a segment.

Definition. An event is an arbitrary subset A of the space of elementary outcomes Ω. The elementary outcomes that make up event A are said to be favorable to A.

An event A is said to have occurred if the experiment results in an elementary outcome w ∈ A, i.e. an outcome favorable to A.

Consider example 2: A = {1, 3, 5} is the event that an odd number of points is rolled; B = {2, 4, 6} is the event that an even number of points is rolled.

o The entire space of elementary outcomes Ω, regarded as an event, is called the certain event, since it occurs in every experiment (always).

o The empty set ∅ (a set containing no elementary outcomes) is called the impossible event, because it never occurs.

All other events besides Ω and ∅ are called random.

    Operations on events

0.1 The sum of events A and B is the union of these sets, A ∪ B.

A ∪ B is the event that occurs if and only if at least one of the events A or B occurs.

0.2 The product of events A and B is the intersection of the sets A and B, i.e. A ∩ B. It is denoted AB.

AB is the event in which A and B occur simultaneously.

0.3 The difference of events A and B is the difference of the sets, A \ B.

A \ B is the event that occurs if and only if A occurs and B does not occur.

o Events A and B are called incompatible if AB = ∅. If A and B are incompatible, their sum is denoted A + B.

o Event A is said to entail event B if A is a subset of B, i.e. A ⊆ B (when A occurs, B occurs).

o The event Ā = Ω \ A is called the opposite of event A.

Example 2: Ā = {2, 4, 6}; Ā occurs exactly when A does not occur.

o Events H₁, H₂, …, H_n are said to form a complete group if H₁ + H₂ + … + H_n = Ω (the events H₁, H₂, …, H_n being incompatible, i.e. Hᵢ ∩ Hⱼ = ∅ for i ≠ j).

For example, A and Ā form a complete group: A + Ā = Ω.

Suppose some random experiment described by the space Ω is carried out N times. Let A be an event (A ⊆ Ω) and N(A) the number of those experiments in which A occurred.

Then the number ν(A) = N(A) / N is called the relative frequency of the event A.

    Axioms of probability theory

    Let Ω be the space of elementary outcomes. Suppose that F is some class of subsets of Ω.

o An event is a subset of Ω belonging to the class F. Each event A is associated with a real number P(A), called the probability of A, so that the following axioms are satisfied:

Axiom 1. P(A) ≥ 0.

Axiom 2. P(Ω) = 1, i.e. the probability of the certain event is 1.

Axiom 3 (countable additivity). If the events A₁, A₂, … are pairwise incompatible, then P(A₁ ∪ A₂ ∪ …) = P(A₁) + P(A₂) + …

    Elements of combinatorics

    Lemma 1. From m elements a 1 ,…,a m of the first group and n elements b 1 ,…,b n of the second group, it is possible to compose exactly m∙n ordered pairs of the form (a i , b j ), containing one element from each group.

Proof: arrange the pairs in a table with m rows and n columns, placing the pair (a_i, b_j) in row i and column j. Every pair occurs exactly once, so in total we have m∙n pairs.

    Example. There are 4 suits in the deck (hearts, spades, clubs, diamonds), each suit has 9 cards. Total n=4∙9=36.

Lemma 2. From n₁ elements of the first group a₁, a₂, …, a_{n₁},

n₂ elements of the second group b₁, b₂, …, b_{n₂},

…, and n_k elements of the k-th group x₁, x₂, …, x_{n_k},

it is possible to compose exactly n₁ ∙ n₂ ∙ … ∙ n_k different ordered combinations of the form (a_i, b_j, …, x_s), containing one element from each group.

Proof (by induction on k):

1. For k = 2 the statement is true (Lemma 1).

2. Suppose Lemma 2 holds for k groups; let us prove it for k + 1 groups. Regard a combination of k + 1 elements as a pair: a combination of the first k elements, and an element of the (k + 1)-th group. By the assumption, the number of combinations of k elements is n₁ n₂ … n_k; by Lemma 1, the number of combinations of k + 1 elements is then n₁ n₂ … n_k ∙ n_{k+1}.

    Example. When throwing two dice N=6∙6=36. When throwing three dice N=6∙6∙6=216.
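Both lemmas can be verified by direct enumeration; a sketch using `itertools.product`:

```python
from itertools import product

# Lemma 1: m*n ordered pairs, one element from each of two groups.
suits = ["hearts", "spades", "clubs", "diamonds"]
ranks = range(1, 10)                   # 9 cards per suit, as in the example
deck = list(product(suits, ranks))
print(len(deck))                       # 4 * 9 = 36

# Lemma 2: n1*n2*...*nk ordered combinations, one element per group.
die = range(1, 7)
two_dice = list(product(die, die))
three_dice = list(product(die, die, die))
print(len(two_dice), len(three_dice))  # 36 and 216
```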

    Geometric probabilities

Suppose there is a segment L on the number line and a segment l that is part of L. A point is thrown at random onto L. The probability that the point falls on l is

P = Length(l) / Length(L) — the geometric probability on a line.

Let a plane figure g be part of a plane figure G. A point is thrown at random onto the figure G. The probability of the point falling into the figure g is defined by the equality

P = Area(g) / Area(G) — the geometric probability in the plane.

Let a figure v in space be part of a figure V. A point is thrown at random onto the figure V. The probability of the point falling into the figure v is defined by the equality

P = Volume(v) / Volume(V) — the geometric probability in space.

The drawback of the classical definition of probability is that it does not apply to trials with an infinite number of outcomes; geometric probabilities are introduced to eliminate this drawback.

    Properties of Probability

Property 1. The probability of an impossible event is 0: P(∅) = 0.

Property 2. The probability of the certain event is 1: P(Ω) = 1.

Property 3. For any event A, 0 ≤ P(A) ≤ 1: since ∅ ⊆ A ⊆ Ω, we have 0 = P(∅) ≤ P(A) ≤ P(Ω) = 1.

Property 4. If events A and B are incompatible, the probability of their sum equals the sum of the probabilities: P(A + B) = P(A) + P(B).

    Random variables

o A random variable X is a function X(w) mapping the space of elementary outcomes Ω into the set of real numbers ℝ.

Example. Let a coin be tossed twice. Then Ω = {(h,h), (h,t), (t,h), (t,t)}.

Consider the random variable X, the number of heads, on the space of elementary outcomes Ω. The set of possible values of this random variable is {2, 1, 0}:

w:    (h,h)  (h,t)  (t,h)  (t,t)
X(w):   2      1      1      0

The set of values of a random variable is denoted Ω_x. One of the important characteristics of a random variable is its distribution function.

o The distribution function of a random variable X is the function F(x) of a real variable x that gives the probability that X takes, as a result of the experiment, a value less than the fixed number x: F(x) = P(X < x).

If X is regarded as a random point on the x-axis, then from the geometric point of view F(x) is the probability that the random point X lands to the left of the point x as a result of the experiment.
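For the two-coin example above, F(x) = P(X < x) is a step function. A sketch computing it from the distribution of X, the number of heads in two tosses of a fair coin:

```python
from fractions import Fraction

# Each of the 4 equally likely outcomes has probability 1/4, so
# P(X=0) = 1/4, P(X=1) = 2/4, P(X=2) = 1/4.
pmf = {0: Fraction(1, 4), 1: Fraction(2, 4), 2: Fraction(1, 4)}

def F(x):
    """Distribution function F(x) = P(X < x) (strict inequality, as defined above)."""
    return sum(p for value, p in pmf.items() if value < x)

for x in (0, 0.5, 1, 1.5, 2, 2.5):
    print(x, F(x))      # steps: 0, 1/4, 1/4, 3/4, 3/4, 1
```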

    The simplest flow of events.

    Let's consider events that occur at random times.

    o A flow of events is a sequence of events that occur at random times.

    Examples of flows are: calls arriving at a telephone exchange or at an emergency medical service, planes arriving at an airport, customers arriving at a service enterprise, a sequence of element failures, and many others.

    Among the properties that flows can have, we single out stationarity, absence of aftereffect, and ordinariness.

    o A flow of events is called stationary if the probability of the occurrence of k events during a time interval of duration t depends only on k and t.

    Thus, the property of stationarity means that the probability of the occurrence of k events on any time interval depends only on the number k and on the duration t of the interval and does not depend on the starting point of the interval; here different time intervals are assumed to be disjoint. For example, the probabilities of the occurrence of k events on the time intervals (1, 7), (10, 16), (T, T+6), all of the same duration t = 6 time units, are equal to one another.

    o A flow of events is called ordinary if no more than one event can occur in an infinitesimally small time interval.

    Thus, the property of ordinariness means that the occurrence of two or more events in a short time interval is practically impossible. In other words, the probability of more than one event occurring at the same moment is practically zero.

    o A flow of events is said to have no aftereffect (absence of consequences) if the numbers of events occurring in non-overlapping time intervals are mutually independent. Thus, the property of no aftereffect means that the probability of the occurrence of k events on any time interval does not depend on whether or not events appeared at moments preceding the beginning of the interval under consideration. In other words, the conditional probability of the occurrence of k events on any time interval, computed under an arbitrary assumption about what happened before the beginning of that interval (i.e., how many events appeared and in what order), is equal to the unconditional probability. Consequently, the history of the flow does not affect the probability of events occurring in the near future.

    o A flow of events is called simplest, or Poisson, if it is stationary, ordinary, and has no aftereffect.

    o The flow intensity λ is the average number of events occurring per unit time.

    If the constant intensity λ of the flow is known, then the probability of the occurrence of k events of the simplest flow during a time interval of duration t is determined by Poisson's formula:

    P_t(k) = (λt)^k e^(−λt) / k!, k = 0, 1, 2, …

    This formula reflects all the properties of the simplest flow, so it can be considered a mathematical model of the simplest flow.

    Example. The average number of calls received by the PBX per minute is two. Find the probability that in 5 minutes the PBX receives: a) two calls; b) fewer than two calls; c) at least two calls. The flow of calls is assumed to be simplest.

    By the conditions, λ = 2, t = 5, k = 2, so λt = 10. By Poisson's formula:

    a) P_5(2) = 10² e^(−10) / 2! = 50 e^(−10) ≈ 0.00227; this event is practically impossible.

    b) P_5(0) + P_5(1) = e^(−10) + 10 e^(−10) = 11 e^(−10) ≈ 0.0005; the event is practically impossible. The probabilities add because the events "no calls received" and "one call received" are incompatible.

    c) P(k ≥ 2) = 1 − [P_5(0) + P_5(1)] = 1 − 11 e^(−10) ≈ 0.9995; this event is almost certain.
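The numbers in this example can be reproduced directly from Poisson's formula; a small sketch (the function name is a choice of this illustration):

```python
import math

def poisson_p(k, lam, t):
    """P_t(k) = (lam*t)**k * exp(-lam*t) / k!  -- simplest (Poisson) flow."""
    a = lam * t
    return a ** k * math.exp(-a) / math.factorial(k)

lam, t = 2, 5                                          # lambda = 2 calls/min, t = 5 min
p_two = poisson_p(2, lam, t)                           # a) exactly two calls
p_fewer = poisson_p(0, lam, t) + poisson_p(1, lam, t)  # b) fewer than two calls
p_at_least = 1 - p_fewer                               # c) at least two calls
```

This gives p_two ≈ 0.00227, p_fewer ≈ 0.0005, p_at_least ≈ 0.9995, matching the example.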

    Properties of dispersion.

    Property 1. The variance of a constant C is 0: D(C) = 0.

    Property 2. A constant factor can be taken out of the variance sign by squaring it: D(CX) = C² D(X).

    Property 3. The variance of the sum of two independent random variables is equal to the sum of the variances of these variables: D(X + Y) = D(X) + D(Y).

    Consequence. The variance of the sum of several independent random variables is equal to the sum of the variances of these variables.

    Theorem 2. The variance of the number of occurrences of event A in n independent trials, in each of which the probability p of the occurrence of the event is constant, is equal to the product of the number of trials and the probabilities of the occurrence and non-occurrence of the event in one trial: D(X) = npq.

    Let the random variable X be the number of occurrences of event A in n independent trials. Then X = X_1 + X_2 + … + X_n, where X_i is the number of occurrences of the event in the i-th trial; the X_i are mutually independent, because the outcome of each trial is independent of the outcomes of the others.

    Since M(X_1) = p and M(X_1²) = p, we have D(X_1) = M(X_1²) − [M(X_1)]² = p − p² = p(1 − p) = pq. Obviously, the variance of each of the remaining random variables is also pq, whence D(X) = D(X_1) + … + D(X_n) = npq.

    Example. 10 independent trials are carried out, in each of which the probability of an event occurring is 0.6. Find the variance of the random variable X - the number of occurrences of the event in these trials.

    n = 10; p = 0.6; q = 0.4; D(X) = npq = 10 · 0.6 · 0.4 = 2.4.
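A quick check of D(X) = npq by simulation (the sample size and seed are arbitrary choices of this sketch):

```python
import random

n, p = 10, 0.6
q = 1 - p
theory = n * p * q  # D(X) = npq = 2.4

# Empirical variance of the number of successes in n trials,
# estimated over many repetitions of the experiment.
random.seed(0)
samples = [sum(random.random() < p for _ in range(n)) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The empirical variance agrees with the theoretical value 2.4 to within sampling error.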

    o The initial moment of order k of a random variable X is the mathematical expectation of the random variable X^k:

    ν_k = M(X^k). In particular, ν_1 = M(X), ν_2 = M(X²).

    Using these moments, the formula for the variance can be written as: D(X) = ν_2 − ν_1².

    In addition to the moments of the random variable X itself, it is useful to consider the moments of the deviation X − M(X).

    o The central moment of order k of a random variable X is the mathematical expectation of the variable (X − M(X))^k: μ_k = M[(X − M(X))^k].

    In particular, μ_1 = M[X − M(X)] = 0 and μ_2 = M[(X − M(X))²] = D(X).

    Hence, D(X) = μ_2.

    Based on the definition of the central moments and using the properties of the mathematical expectation, one can obtain the formulas:

    μ_2 = ν_2 − ν_1²,
    μ_3 = ν_3 − 3ν_1ν_2 + 2ν_1³,
    μ_4 = ν_4 − 4ν_1ν_3 + 6ν_1²ν_2 − 3ν_1⁴.

    Higher order moments are rarely used.

    Comment. The moments defined above are called theoretical. In contrast to theoretical moments, moments that are calculated from observational data are called empirical.
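To illustrate the distinction, empirical moments can be computed from a small hypothetical sample (the data below are an invented example); the identity D = ν_2 − ν_1² holds for empirical moments as well:

```python
data = [1, 2, 2, 3, 3, 3, 4, 4, 5]  # hypothetical observations
n = len(data)

nu1 = sum(data) / n                           # first initial moment (the mean)
nu2 = sum(x ** 2 for x in data) / n           # second initial moment
mu2 = sum((x - nu1) ** 2 for x in data) / n   # second central moment (the variance)
# mu2 equals nu2 - nu1**2, the empirical analogue of D(X) = nu_2 - nu_1**2
```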

    Systems of random variables.

    o A vector (X_1, X_2, …, X_n), where X_1, …, X_n are random variables, is called an n-dimensional random vector.

    Thus, a random vector maps the space of elementary outcomes Ω into the n-dimensional real space IR^n: Ω → IR^n.

    o The function

    F(x_1, …, x_n) = P(X_1 < x_1, …, X_n < x_n)

    is called the distribution function of the random vector, or the joint distribution function of the random variables X_1, …, X_n.

    Property 4.

    o A random vector is called discrete, if all its components are discrete random variables.

    o A random vector is called continuous if there exists a non-negative function p(x_1, …, x_n), called the distribution density of the random variables, such that the distribution function is

    F(x_1, …, x_n) = ∫_(−∞)^(x_1) … ∫_(−∞)^(x_n) p(t_1, …, t_n) dt_1 … dt_n.

    Correlation properties.

    Property 1. The absolute value of the correlation coefficient does not exceed one: |r| ≤ 1.

    Property 2. For |r| = 1 it is necessary and sufficient that the random variables X and Y be related by a linear dependence, i.e. Y = aX + b with probability 1.

    Property 3. If random variables are independent, then they are uncorrelated, i.e. r = 0.

    Indeed, let X and Y be independent; then by the property of the mathematical expectation M(XY) = M(X)M(Y), so the covariance M(XY) − M(X)M(Y) = 0 and hence r = 0.

    o Two random variables X and Y are called correlated if their correlation coefficient is different from zero.

    o Random variables X and Y are called uncorrelated if their correlation coefficient is 0.

    Comment. The correlation of two random variables implies their dependence, but the dependence does not yet imply correlation. From the independence of two random variables it follows that they are uncorrelated, but from uncorrelatedness it is still impossible to conclude that these variables are independent.

    The correlation coefficient characterizes the tendency of random variables to linear dependence. The greater the absolute value of the correlation coefficient, the greater the tendency towards linear dependence.
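The remark that uncorrelatedness does not imply independence can be seen on a standard example (not from the original text): X uniform on {−1, 0, 1} and Y = X², which is fully determined by X yet has zero correlation with it.

```python
# X takes the values -1, 0, 1 with probability 1/3 each; Y = X**2.
# Y is completely determined by X (so they are dependent), yet r = 0.
xs = [-1, 0, 1]
p = 1 / 3

mx = sum(x * p for x in xs)            # M(X) = 0
my = sum(x ** 2 * p for x in xs)       # M(Y) = 2/3
mxy = sum(x ** 3 * p for x in xs)      # M(XY) = M(X**3) = 0
cov = mxy - mx * my                    # covariance is 0, hence r = 0
```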

    o The asymmetry (skewness) coefficient of a random variable X is the number A = μ_3 / σ³.

    The sign of the asymmetry coefficient indicates right-sided (A > 0) or left-sided (A < 0) asymmetry.

    o The kurtosis (excess) of a random variable X is the number E = μ_4 / σ⁴ − 3.

    It characterizes the peakedness or flatness of the distribution curve relative to the normal distribution curve.

    Generating functions

    o By an integer-valued random variable we mean a discrete random variable that can take the values 0, 1, 2, …

    Thus, if a random variable X is integer-valued, then it has a distribution series p_k = P(X = k), k = 0, 1, 2, …

    Its generating function is the function φ(z) = M(z^X) = Σ_k p_k z^k.
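Generating functions are convenient because the generating function of a sum of independent integer-valued variables is the product of their generating functions. A sketch with a fair die (the helper poly_mul is an illustration of this sketch, not a library function): multiplying the die's coefficient list by itself yields the distribution of the sum of two dice.

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (index = power of z)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

die = [0.0] + [1 / 6] * 6        # phi(z) = (z + z**2 + ... + z**6) / 6
two_dice = poly_mul(die, die)    # coefficient of z**k is P(X1 + X2 = k)
```

For instance, the coefficient of z^7 is 6/36 = 1/6, the familiar probability of rolling a total of 7.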

    Chi-squared (χ²) distribution

    Let X_i (i = 1, 2, …, n) be independent normal random variables, each with mathematical expectation zero and standard deviation (and hence variance) equal to one. Then the sum of the squares of these variables, χ² = X_1² + X_2² + … + X_n², is distributed according to the χ² law with k = n degrees of freedom. If the variables X_i are related by one linear relationship, then the number of degrees of freedom is k = n − 1.

    The density of this distribution is

    f(x) = 0 for x ≤ 0;  f(x) = x^(k/2 − 1) e^(−x/2) / (2^(k/2) Γ(k/2)) for x > 0,

    where Γ(x) is the gamma function; in particular, Γ(n + 1) = n!.

    This shows that the χ² distribution is determined by one parameter, the number of degrees of freedom k. As the number of degrees of freedom increases, the distribution slowly approaches the normal one.

    Student distribution

    Let Z be a normally distributed variable with M(Z) = 0 and σ² = 1, i.e. Z ~ N(0, 1), and let V be a variable independent of Z that is distributed according to the χ² law with k degrees of freedom. Then the quantity

    T = Z / √(V/k)

    has the distribution called the t-distribution or Student distribution (after the pseudonym of the English statistician W. Gosset) with k degrees of freedom. As the number of degrees of freedom increases, the Student distribution quickly approaches the normal one.

    The distribution density of the random variable T has the form

    S(t, k) = B_k (1 + t²/k)^(−(k+1)/2), −∞ < t < +∞, where B_k = Γ((k+1)/2) / (√(πk) Γ(k/2)).

    The random variable T has mathematical expectation M(T) = 0 and variance D(T) = k/(k − 2) (k > 2).

    Fisher distribution

    If U and V are independent random variables distributed according to the χ² law with k_1 and k_2 degrees of freedom, then the quantity

    F = (U/k_1) / (V/k_2)

    has the Fisher distribution F with k_1 and k_2 degrees of freedom. The density of this distribution is

    f(x) = 0 for x ≤ 0;  f(x) = C_0 x^(k_1/2 − 1) / (k_2 + k_1 x)^((k_1 + k_2)/2) for x > 0, where

    C_0 = Γ((k_1 + k_2)/2) k_1^(k_1/2) k_2^(k_2/2) / [Γ(k_1/2) Γ(k_2/2)].

    The Fisher distribution F is determined by two parameters, the numbers of degrees of freedom k_1 and k_2.
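The χ² density above can be written out with the standard-library gamma function. A minimal sketch; the sanity check uses the fact that for k = 2 the density reduces to (1/2)e^(−x/2):

```python
import math

def chi2_pdf(x, k):
    """Density of the chi-squared distribution with k degrees of freedom
    (zero for x <= 0)."""
    if x <= 0:
        return 0.0
    return x ** (k / 2 - 1) * math.exp(-x / 2) / (2 ** (k / 2) * math.gamma(k / 2))
```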

    Characteristic functions

    o 1. A random variable Z = X + iY, where i is the imaginary unit (i² = −1) and X and Y are real random variables, is called a complex-valued random variable.

    o 2. The mathematical expectation of a complex-valued random variable Z = X + iY is M(Z) = M(X) + iM(Y). All properties of the mathematical expectation remain valid for complex-valued random variables.

    o 3. Complex-valued random variables Z_1 = X_1 + iY_1 and Z_2 = X_2 + iY_2 are called independent if the pairs (X_1, Y_1) and (X_2, Y_2) are independent.
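The preceding definitions are the standard preliminaries for the characteristic function itself, which the section title refers to but which is not stated explicitly above; its usual definition is

```latex
\varphi_X(t) = M\,e^{itX} = M\cos(tX) + i\,M\sin(tX), \qquad t \in \mathbb{R},
```

i.e. the mathematical expectation of the complex-valued random variable e^(itX).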

    Laws of large numbers

    Random functions

    o A random function is a function X(t) whose value for each value of the argument t is a random variable.

    In other words, a random function is a function that, as a result of experiment, can take one or another specific form, although it is not known in advance which one.

    o The specific form taken by a random function as a result of an experiment is called a realization of the random function.

    Since in practice the argument t is most often time, a random function is also called a random process.

    The figure shows several implementations of a random process.

    If we fix the value of the argument t, the random function X(t) turns into a random variable, which is called a cross-section of the random function corresponding to the time t. We will assume that the distribution of the cross-section is continuous; then X(t) for the given t is described by the distribution density p(x; t).

    Obviously, p(x; t) is not an exhaustive characteristic of the random function X(t), since it does not express the dependence between the cross-sections of X(t) at different times t. A fuller description is given by the function p(x_1, x_2; t_1, t_2), the joint distribution density of the system of random variables (X(t_1), X(t_2)), where t_1 and t_2 are arbitrary values of the argument t of the random function. An even fuller characterization of the random function X(t) is given by the joint distribution density of a system of three random variables, and so on.

    o A random process is said to be of order n if it is completely determined by the joint distribution density of n arbitrary cross-sections of the process, i.e. of the system of n random variables (X(t_1), …, X(t_n)), where X(t_i) is the cross-section of the process corresponding to the time t_i, but is not determined by the joint distribution of fewer than n cross-sections.

    o If the joint distribution density of any two cross-sections of a process completely determines it, then the process is called a Markov process.

    Let there be a random function X(t). The task arises of describing it by one or several non-random characteristics. As the first of them it is natural to take the function m_x(t) = M[X(t)], the mathematical expectation of the random process. As the second, the standard deviation of the random process, σ_x(t), is taken.

    These characteristics are certain functions of t. The first of them is the average trajectory of all possible realizations. The second characterizes the possible spread of the realizations of the random function around the average trajectory. But these characteristics are not enough. It is also important to know the dependence between the quantities X(t_1) and X(t_2). This dependence can be characterized by the correlation function, i.e. the correlation moment of the pair of cross-sections: K_x(t_1, t_2) = M[(X(t_1) − m_x(t_1))(X(t_2) − m_x(t_2))].

    Let there be two random processes, several implementations of which are shown in the figures.

    These random processes have approximately the same mathematical expectations and standard deviations. Nevertheless, they are different processes. Any realization of the random function X_1(t) changes its values slowly as t changes, which cannot be said of the random function X_2(t). For the first process the dependence between the cross-sections X_1(t) and X_1(t + Δt) is stronger than the dependence between the cross-sections X_2(t) and X_2(t + Δt) of the second process; i.e. K_(x1)(t, t + Δt) decreases more slowly than K_(x2)(t, t + Δt) as Δt increases. In the second case the process "forgets" its past faster.

    Let us dwell on the properties of the correlation function, which follow from the properties of the correlation moment of a pair of random variables.

    Property 1. Symmetry: K_x(t_1, t_2) = K_x(t_2, t_1).

    Property 2. If a non-random term φ(t) is added to the random function X(t), the correlation function does not change: for Y(t) = X(t) + φ(t), K_y(t_1, t_2) = K_x(t_1, t_2).
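These characteristics can be estimated numerically over many realizations. A sketch (the process X(t) = A cos t + B sin t with independent standard normal A and B is a choice of this illustration; for it m_x(t) = 0 and the correlation function is K(t_1, t_2) = cos(t_2 − t_1)):

```python
import math
import random

random.seed(2)

def sample_pairs(n, t1, t2):
    """Draw n realizations of X(t) = A*cos(t) + B*sin(t), A, B ~ N(0, 1),
    and return the pair of cross-sections (X(t1), X(t2)) for each."""
    pairs = []
    for _ in range(n):
        a, b = random.gauss(0, 1), random.gauss(0, 1)
        pairs.append((a * math.cos(t1) + b * math.sin(t1),
                      a * math.cos(t2) + b * math.sin(t2)))
    return pairs

t1, t2 = 0.3, 1.1
pairs = sample_pairs(100_000, t1, t2)
# m_x(t) = 0 here, so K(t1, t2) is estimated by the mean of the products
k_est = sum(x * y for x, y in pairs) / len(pairs)
```

The estimate approaches cos(t_2 − t_1) = cos(0.8) as the number of realizations grows.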

    A random experiment is an experiment whose outcome cannot be accurately predicted. Its mathematical model must meet the following requirements: every observed result of the experiment is accounted for, and the relative frequencies of the results are statistically stable over repeated realizations of the experiment.

    An accurate description of the nature of a random experiment entails the definition of the elementary outcomes, the random events and their probabilities, random variables, etc.

