
CHAPTER 1 PROBABILITY THEORY

Probability experiment. Subject and tasks of probability theory.

The results of any experiment depend to one degree or another on the set of conditions S under which the experiment is carried out. These conditions either objectively exist or are created artificially (i.e., an experiment is planned).

According to the degree of dependence of the results of an experiment on the conditions under which it was carried out, all experiments can be divided into two classes: deterministic and probabilistic.

o Deterministic experiments are experiments whose results can be predicted in advance, on the basis of natural-science laws, from a given set of conditions S.

An example of a deterministic experiment is the determination of the acceleration a = F/m acquired by a body of mass m under the influence of a force F: the desired value is uniquely determined by the set of experimental conditions (the mass m and the force F).

Deterministic, for example, are all processes based on the laws of classical mechanics, according to which the motion of a body is uniquely determined by the given initial conditions and the forces acting on it.

o Probabilistic (stochastic, or random) experiments are experiments that can be repeated an arbitrary number of times under the same stable conditions but whose outcome, unlike that of a deterministic experiment, is ambiguous and random. That is, it is impossible to predict the result of a probabilistic experiment in advance from the set of conditions S. However, if a probabilistic experiment is repeated many times under the same conditions, the totality of outcomes of such experiments obeys certain regularities. Probability theory studies these regularities (more precisely, their mathematical models). Let us give several examples of probabilistic experiments, which from now on we will simply call experiments.

Example 1

Let the experiment consist of tossing a symmetric coin once. The experiment can end in one of two mutually exclusive outcomes: heads or tails. If we knew exactly the initial translational and rotational velocities and the initial position of the coin at the moment of the throw, we could predict the result of this experiment by the laws of classical mechanics, i.e. it would be deterministic. However, the initial data of the experiment cannot be fixed and change constantly; therefore the result of the experiment is said to be ambiguous, random. Nevertheless, if we toss the same symmetric coin many times, keeping the conditions of the experiment as stable as possible, the totality of its outcomes obeys certain regularities: the relative frequency m1/n of heads and the relative frequency m2/n of tails each stabilize near 1/2 (here n is the number of throws, m1 the number of heads, m2 the number of tails).
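This stabilization can be checked numerically. A minimal sketch in Python (the simulation setup and the function name are ours, not part of the text):

```python
import random

def relative_frequency_of_heads(n, seed=1):
    """Toss a fair coin n times and return the relative frequency m1/n of heads."""
    rng = random.Random(seed)
    m1 = sum(rng.randint(0, 1) for _ in range(n))  # 1 = heads, 0 = tails
    return m1 / n

for n in (100, 10_000, 1_000_000):
    # The frequency drifts toward 1/2 as n grows.
    print(n, relative_frequency_of_heads(n))
```

The seed is fixed only so that repeated runs give the same realization of the experiment.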

Example 2

Suppose we are filling out a sports lotto card. Before the winning draw it is impossible to predict how many numbers will be guessed correctly. However, the experience of conducting sports lotto draws shows that the average percentage of players who guess m (1 ≤ m ≤ 6) numbers fluctuates around a certain constant value. These "regularities" (the average percentage guessing a given number of numbers) are used to calculate the prize funds.

Probabilistic experiments share the following features: unpredictability of the result; the presence of certain quantitative regularities when they are repeated many times under the same conditions; many possible outcomes.

o The subject of probability theory is the quantitative and qualitative analysis of mathematical models of probabilistic experiments, together with the statistical processing of experimental data.

o Probability theory is the science that analyzes mathematical models for decision making under conditions of uncertainty.

Events and operations on them.

Relative frequencies and their properties

The primary concept of probability theory, not defined through other concepts, is the space of elementary outcomes Ω. Usually the indecomposable, mutually exclusive possible results of an experiment are taken as the space of elementary outcomes.

Example

1. Suppose a symmetric coin is tossed. Then Ω = {h, t} (heads and tails).

2. A die is rolled. Then Ω = {1, 2, 3, 4, 5, 6}.

3. Two coins are tossed. Then Ω = {(h,h), (h,t), (t,h), (t,t)}.

4. Two dice are thrown. Then Ω = {(i, j) : 1 ≤ i, j ≤ 6}; the number of elementary outcomes is 36.

5. A point w is thrown at random onto the number axis.

6. Two points are thrown at random onto a segment.


Definition. An event is an arbitrary subset A of the space of elementary outcomes Ω. The elementary outcomes that make up event A are said to be favorable to A.

Event A is said to have occurred if the experiment results in an elementary outcome w ∈ A, i.e. an outcome favorable to A.

Consider example 2 (a die is rolled): A = {1, 3, 5} is the event that an odd number of points is rolled; B = {2, 4, 6} is the event that an even number of points is rolled.

o The entire space of elementary outcomes Ω, regarded as an event, is called the certain event, since it occurs in every experiment (always).

o The empty set ∅ (the set containing no elementary outcomes) is called the impossible event, because it never occurs.

All events other than Ω and ∅ are called random.

Operations on events

0.1 The sum of events A and B is the union of these sets, A ∪ B.

A ∪ B is the event that occurs if and only if at least one of the events A or B occurs.

0.2 The product of events A and B is the intersection of the sets A and B, i.e. A ∩ B; it is denoted AB.

AB is the event that A and B occur simultaneously.

0.3 The difference of events A and B is the difference of the sets, A \ B.

A \ B is the event that occurs if and only if A occurs and B does not.

o Events A and B are called incompatible if AB = ∅. When A and B are incompatible, their union is denoted A + B.

o Event A is said to entail event B if A is a subset of B, i.e. A ⊂ B (whenever A occurs, B occurs).

o The event Ā = Ω \ A is called the event opposite to A.

Example 2: if A = {1, 3, 5}, then Ā = {2, 4, 6}; Ā occurs exactly when A does not occur.

o Events H1, H2, …, Hn are said to form a complete group if H1 + H2 + … + Hn = Ω and they are pairwise incompatible (HiHj = ∅ for i ≠ j).

For example, A and Ā form a complete group: A + Ā = Ω.
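Since events are simply sets, these operations can be tried directly with Python's built-in set type; a small sketch for the die-rolling example:

```python
# Model the die-rolling experiment: events are subsets of Omega.
Omega = {1, 2, 3, 4, 5, 6}
A = {1, 3, 5}          # odd number of points
B = {2, 4, 6}          # even number of points
not_A = Omega - A      # opposite event (set difference)

print(A | B == Omega)  # A + B is the certain event
print(A & B == set())  # A and B are incompatible
print(not_A == B)      # A and not-A form a complete group
```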

Let us assume that some random experiment is carried out whose result is described by the space Ω, and let us perform N experiments. Let A be some event (A ⊂ Ω), and let N(A) be the number of those experiments in which event A occurred.

Then the number ν(A) = N(A)/N is called the relative frequency of event A.

Axioms of probability theory

Let Ω be the space of elementary outcomes. Suppose that F is some class of subsets of Ω.

o An event is a subset of Ω belonging to the class F. To each event A there is assigned a real number P(A), called the probability of A, so that the following axioms are satisfied:

Axiom 1. P(A) ≥ 0 for every event A.

Axiom 2. P(Ω) = 1, i.e. the probability of the certain event is 1.

Axiom 3 (countable additivity). If A1, A2, … are events and AiAj = ∅ for i ≠ j (the events are pairwise incompatible), then P(A1 + A2 + …) = P(A1) + P(A2) + …

Elements of combinatorics

Lemma 1. From m elements a1, …, am of the first group and n elements b1, …, bn of the second group it is possible to compose exactly m·n ordered pairs of the form (ai, bj) containing one element from each group.

Proof. Each of the m elements ai can be paired with each of the n elements bj; listing the pairs row by row gives m rows of n pairs each. In total we have m·n pairs.

Example. A deck has 4 suits (hearts, spades, clubs, diamonds), each with 9 cards. In total n = 4·9 = 36 cards.

Lemma 2. From n1 elements of the first group a1, a2, …, an1, n2 elements of the second group b1, b2, …, bn2, …, and nk elements of the k-th group x1, x2, …, xnk, it is possible to compose exactly n1·n2·…·nk different ordered combinations of the form (ai, bj, …, xs), containing one element from each group.

1. For k = 2 the statement is true (Lemma 1).

2. Suppose Lemma 2 holds for k groups; let us prove it for k + 1 groups. Regard a combination of k + 1 elements as a pair: a combination of elements of the first k groups together with an element of the (k + 1)-th group. By the induction assumption the number of combinations of k elements is n1n2…nk; by Lemma 1 the number of combinations of k + 1 elements is then n1n2…nk·nk+1.

Example. When throwing two dice N=6∙6=36. When throwing three dice N=6∙6∙6=216.
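The multiplication rule of Lemma 2 amounts to enumerating Cartesian products; a short check of both examples (the rank values 6 to 14 are an arbitrary labeling of the 9 cards per suit):

```python
from itertools import product

# Lemma 1: ordered pairs (suit, rank), one element from each group.
suits = ["hearts", "spades", "clubs", "diamonds"]
ranks = list(range(6, 15))           # 9 cards per suit, as in the deck example
deck = list(product(suits, ranks))
print(len(deck))                     # 4 * 9 = 36

# Lemma 2: three dice, one face from each of three groups of 6.
dice_3 = list(product(range(1, 7), repeat=3))
print(len(dice_3))                   # 6 * 6 * 6 = 216
```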

Geometric probabilities

Suppose a segment l is part of a segment L of the number line, and a point is thrown at random onto L. The probability that the point falls on l is

P = length(l) / length(L), the geometric probability on the line.

Let a plane figure g be part of a plane figure G. A point is thrown at random onto the figure G. The probability that the point falls into the figure g is defined by the equality

P = area(g) / area(G), the geometric probability in the plane.

Let a figure v in space be part of a figure V. A point is thrown at random onto the figure V. The probability that the point falls into the figure v is defined by the equality

P = volume(v) / volume(V), the geometric probability in space.

The disadvantage of the classical definition of probability is that it does not apply to trials with an infinite number of outcomes. To eliminate this drawback, they introduce geometric probabilities.

Properties of Probability

Property 1. The probability of the impossible event is 0: P(∅) = 0. Indeed, Ω + ∅ = Ω and Ω∅ = ∅, so P(Ω) + P(∅) = P(Ω), whence P(∅) = 0.

Property 2. The probability of the certain event is 1: P(Ω) = 1 (Axiom 2).

Property 3. For any event A, 0 ≤ P(A) ≤ 1. Indeed, P(A) ≥ 0 by Axiom 1; and since A + Ā = Ω with AĀ = ∅, we have P(A) + P(Ā) = 1, so P(A) ≤ 1.

Property 4. If events A and B are incompatible, then the probability of their sum equals the sum of their probabilities: P(A + B) = P(A) + P(B).

Random variables

o A random variable X is a function X(w) that maps the space of elementary outcomes Ω into the set of real numbers R.

Example. Let a coin be tossed twice. Then Ω = {(h,h), (h,t), (t,h), (t,t)}.

Let us consider the random variable X, the number of heads, on the space of elementary outcomes Ω. The set of possible values of this random variable is {0, 1, 2}:

w      (h,h)  (h,t)  (t,h)  (t,t)
X(w)     2      1      1      0

The set of values of a random variable is denoted Ω_X. One of the important characteristics of a random variable is its distribution function.

o The distribution function of a random variable X is the function F(x) = P(X < x) of a real variable x, which gives the probability that the random variable X takes, as a result of the experiment, a value less than the fixed number x.

If we regard X as a random point on the x axis, then geometrically F(x) is the probability that the random point X lands, as a result of the experiment, to the left of the point x.

The simplest flow of events.

Let's consider events that occur at random times.

o The flow of events call a sequence of events that occur at random times.

Examples of flows: calls arriving at a telephone exchange or an emergency medical service, planes arriving at an airport, clients arriving at a consumer-services enterprise, successive failures of elements, and many others.

Among the properties a flow may have, we single out stationarity, absence of consequences, and ordinariness.

o A flow of events is called stationary if the probability that k events occur in a time interval of duration t depends only on k and t.

Thus the stationarity property means that the probability of k events occurring on any time interval depends only on the number k and the duration t of the interval, not on where the interval begins (the time intervals are assumed disjoint). For example, the probabilities that k events occur on the intervals (1, 7), (10, 16), (T, T + 6), all of the same duration t = 6 time units, are equal to each other.

o A flow of events is called ordinary if no more than one event can occur in an infinitesimally small time interval.

Thus the ordinariness property means that the occurrence of two or more events in a short time interval is practically impossible; in other words, the probability of more than one event occurring at the same moment is practically zero.

o A flow of events is said to have the property of no consequences if the numbers of events occurring in non-overlapping time intervals are mutually independent. Thus this property means that the probability of k events occurring on any time interval does not depend on whether events did or did not appear at moments preceding the beginning of that interval. In other words, the conditional probability of k events occurring over any period, computed under an arbitrary assumption about what happened before the period began (how many events appeared and in what order), equals the unconditional probability. Consequently, the history of the flow does not affect the probability of events occurring in the near future.

o A flow of events is called simplest, or Poisson, if it is stationary, ordinary, and without consequences.

o The intensity λ of a flow is the average number of events occurring per unit time.

If the constant intensity λ of the flow is known, then the probability that k events of the simplest flow occur during a time interval of duration t is given by Poisson's formula:

Pt(k) = (λt)^k · e^(−λt) / k!,  k = 0, 1, 2, …

This formula reflects all the properties of the simplest flow, so it can be considered a mathematical model of the simplest flow.

Example. The average number of calls received by a telephone exchange per minute is two. Find the probability that in 5 minutes there arrive: a) two calls; b) fewer than two calls; c) at least two calls. The flow of calls is assumed to be simplest.

By hypothesis λ = 2, t = 5, k = 2, so λt = 10. By Poisson's formula:

a) P5(2) = 10^2 · e^(−10) / 2! = 50e^(−10) ≈ 0.00227; this event is practically impossible.

b) P5(k < 2) = P5(0) + P5(1) = e^(−10) + 10e^(−10) = 11e^(−10) ≈ 0.0005; this event is also practically impossible (the events "no calls arrived" and "one call arrived" are incompatible, so their probabilities add).

c) P5(k ≥ 2) = 1 − P5(k < 2) ≈ 1 − 0.0005 = 0.9995; this event is practically certain.
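The three answers can be reproduced with a few lines of Python (the function name is ours):

```python
import math

def poisson_pk(k, lam, t):
    """Probability of k events of the simplest flow in time t (Poisson's formula)."""
    mu = lam * t
    return mu ** k * math.exp(-mu) / math.factorial(k)

lam, t = 2, 5
p2 = poisson_pk(2, lam, t)                              # a) exactly two calls
p_lt2 = poisson_pk(0, lam, t) + poisson_pk(1, lam, t)   # b) fewer than two
p_ge2 = 1 - p_lt2                                       # c) at least two
print(round(p2, 5), round(p_lt2, 5), round(p_ge2, 4))   # 0.00227 0.0005 0.9995
```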

Properties of the variance.

Property 1. The variance of a constant C is zero: DC = 0.

Property 2. A constant factor can be taken out of the variance sign by squaring it: D(CX) = C^2·DX.

Property 3. The variance of the sum of two independent random variables equals the sum of their variances: D(X + Y) = DX + DY.

Consequence. The variance of the sum of several independent random variables is equal to the sum of the variances of these variables.

Theorem 2. The variance of the number of occurrences of event A in n independent trials, in each of which the probability p of occurrence is constant, equals the product of the number of trials and the probabilities of occurrence and non-occurrence of the event in one trial: DX = npq, where q = 1 − p.

Proof. The random variable X is the number of occurrences of event A in n independent trials: X = X1 + X2 + … + Xn, where Xi is the number of occurrences of the event in the i-th trial. The Xi are mutually independent, because the outcome of each trial does not depend on the outcomes of the others.

Since X1 takes the value 1 with probability p and the value 0 with probability q, we have MX1 = p and M(X1^2) = p, so DX1 = M(X1^2) − (MX1)^2 = p − p^2 = pq. Obviously the variance of each of the remaining Xi is also pq, whence DX = DX1 + … + DXn = npq.

Example. 10 independent trials are carried out, in each of which the probability of an event occurring is 0.6. Find the variance of the random variable X - the number of occurrences of the event in these trials.

n = 10; p = 0.6; q = 0.4; hence DX = npq = 10·0.6·0.4 = 2.4.
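The theoretical value DX = npq can be compared against a simulation of the scheme of trials (the sample size and seed are arbitrary):

```python
import random

n, p = 10, 0.6
q = 1 - p
print(n * p * q)   # theoretical variance DX = npq = 2.4

# Empirical check: simulate X, the number of occurrences in n trials.
rng = random.Random(3)
samples = [sum(rng.random() < p for _ in range(n)) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(round(var, 2))   # close to 2.4
```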

o The initial moment of order k of a random variable X is the mathematical expectation of X^k:

ν_k = M(X^k). In particular, ν1 = MX and ν2 = M(X^2).

Using these moments, the formula for the variance can be written as DX = ν2 − ν1^2.

Besides the moments of the random variable X itself, it is useful to consider the moments of the deviation X − MX.

o The central moment of order k of a random variable X is the mathematical expectation of the quantity (X − MX)^k:

μ_k = M(X − MX)^k. In particular, μ1 = M(X − MX) = 0 and μ2 = M(X − MX)^2 = DX.

Hence DX = μ2 = ν2 − ν1^2.

From the definition of the central moment and the properties of the mathematical expectation one can obtain the formulas μ3 = ν3 − 3ν1ν2 + 2ν1^3 and μ4 = ν4 − 4ν1ν3 + 6ν1^2·ν2 − 3ν1^4.

Higher order moments are rarely used.

Comment. The moments defined above are called theoretical. In contrast to theoretical moments, moments that are calculated from observational data are called empirical.
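The relation μ2 = ν2 − ν1^2 is easy to verify exactly for a concrete distribution; a sketch for a fair die, using exact rational arithmetic:

```python
from fractions import Fraction

# Distribution of a fair die: P(X = k) = 1/6, k = 1..6.
values = range(1, 7)
p = Fraction(1, 6)

def initial_moment(k):
    return sum(p * x ** k for x in values)       # nu_k = M(X^k)

nu1, nu2 = initial_moment(1), initial_moment(2)
mu2 = sum(p * (x - nu1) ** 2 for x in values)    # central moment of order 2

print(nu1, nu2, mu2)          # 7/2 91/6 35/12
print(mu2 == nu2 - nu1 ** 2)  # DX = nu2 - nu1^2 holds exactly
```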

Systems of random variables.

o A vector (X1, X2, …, Xn), where X1, …, Xn are random variables, is called an n-dimensional random vector.

Thus a random vector maps the space of elementary outcomes Ω into the n-dimensional real space IR^n.

o The function F(x1, …, xn) = P(X1 < x1, …, Xn < xn) is called the distribution function of the random vector, or the joint distribution function of the random variables X1, …, Xn.

Property 4.

o A random vector is called discrete if all its components are discrete random variables.

o A random vector is called continuous if there is a non-negative function p(x1, …, xn), called the distribution density of the random variables, such that the distribution function equals the integral of the density over the region {t1 < x1, …, tn < xn}.

Properties of the correlation coefficient.

Property 1. The absolute value of the correlation coefficient does not exceed unity: |r| ≤ 1.

Property 2. |r| = 1 if and only if the random variables X and Y are connected by a linear dependence, i.e. Y = aX + b with probability 1.

Property 3. If random variables are independent, then they are uncorrelated, i.e. r = 0.

Indeed, let X and Y be independent; then by the property of the mathematical expectation M(XY) = MX·MY, so cov(X, Y) = M(XY) − MX·MY = 0 and hence r = 0.

o Two random variables X and Y are called correlated if their correlation coefficient is different from zero.

o Random variables X and Y are called uncorrelated if their correlation coefficient is 0.

Comment. Correlation of two random variables implies their dependence, but dependence does not yet imply correlation: from the independence of two random variables it follows that they are uncorrelated, but from uncorrelatedness one cannot conclude that the variables are independent.

The correlation coefficient characterizes the tendency of random variables to linear dependence. The greater the absolute value of the correlation coefficient, the greater the tendency towards linear dependence.
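Properties 2 and 3 can be illustrated numerically: for an exactly linear dependence the sample correlation coefficient is 1, while adding independent noise weakens the linear tendency (the sample-correlation helper is ours):

```python
import math, random

def corr(xs, ys):
    """Sample correlation coefficient of two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

rng = random.Random(4)
x = [rng.gauss(0, 1) for _ in range(50_000)]
noise = [rng.gauss(0, 1) for _ in range(50_000)]

r_linear = corr(x, [2 * xi + 1 for xi in x])        # exact linear dependence: r = 1
y = [xi + ni for xi, ni in zip(x, noise)]
r_noisy = corr(x, y)                                 # theoretical r = 1/sqrt(2) ≈ 0.71
print(round(r_linear, 3), round(r_noisy, 2))
```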

o The asymmetry (skewness) coefficient of a random variable X is the number A = μ3 / σ^3.

The sign of the asymmetry coefficient indicates right-sided or left-sided asymmetry.

o The kurtosis of a random variable X is the number E = μ4 / σ^4 − 3.

It characterizes the peakedness or flatness of the distribution curve relative to the normal distribution curve (for the normal law E = 0).

Generating functions

o By an integer-valued random variable we mean a discrete random variable that can take the values 0, 1, 2, …

Thus, if a random variable X is integer-valued, it has a distribution series p_k = P(X = k), k = 0, 1, 2, …

o The generating function of X is the function φ(s) = M(s^X) = Σ_k p_k·s^k.
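As an illustration (the choice of the Poisson law here is ours), the generating function can be evaluated from a truncated distribution series and compared with the known closed form φ(s) = e^(a(s−1)) for the Poisson law with parameter a:

```python
import math

def generating_function(p, s):
    """phi(s) = sum of p_k * s^k for an integer-valued random variable
    with distribution series p = [p_0, p_1, ...]."""
    return sum(pk * s ** k for k, pk in enumerate(p))

# Poisson(a) distribution series, truncated far enough for double precision.
a = 2.0
p = [a ** k * math.exp(-a) / math.factorial(k) for k in range(60)]

s = 0.5
print(round(generating_function(p, s), 6))   # matches e^{a(s-1)} = e^{-1}
print(round(math.exp(a * (s - 1)), 6))
```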

Chi-squared distribution

Let X1, …, Xn be independent normal random variables, each with mathematical expectation zero and standard deviation (hence variance) one. Then the sum of the squares of these variables, χ^2 = X1^2 + … + Xn^2, is distributed according to the χ^2 law with k = n degrees of freedom. If the variables Xi are connected by one linear relation, for example X1 + … + Xn = n·X̄ (X̄ the sample mean), then the number of degrees of freedom is k = n − 1.

The density of this distribution is f(x) = 0 for x ≤ 0 and f(x) = x^(k/2 − 1)·e^(−x/2) / (2^(k/2)·Γ(k/2)) for x > 0, where Γ is the gamma function; in particular, Γ(n + 1) = n!.

This shows that the χ^2 distribution is determined by one parameter, the number of degrees of freedom k. As the number of degrees of freedom grows, the distribution slowly approaches the normal one.
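The definition suggests a direct way to sample from the χ^2 law: sum the squares of k standard normal draws. A small sketch checking that the sample mean is close to k (a known property of the χ^2 distribution):

```python
import random

def chi2_sample(k, rng):
    """One draw from the chi-squared law with k degrees of freedom:
    a sum of squares of k independent N(0, 1) variables."""
    return sum(rng.gauss(0, 1) ** 2 for _ in range(k))

rng = random.Random(5)
k = 4
draws = [chi2_sample(k, rng) for _ in range(100_000)]
print(round(sum(draws) / len(draws), 2))   # close to k = 4
```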

Student distribution

Let Z be a normally distributed variable with M(Z) = 0 and σ^2 = 1, i.e. Z ~ N(0, 1), and let V be a variable independent of Z that is distributed according to the χ^2 law with k degrees of freedom. Then the quantity t = Z / √(V/k) has the distribution called the t-distribution, or Student distribution ("Student" was the pen name of the English statistician W. Gosset), with k degrees of freedom. As the number of degrees of freedom grows, the Student distribution quickly approaches the normal one.

The distribution density of the random variable t has the form S(t, k) = B_k·(1 + t^2/k)^(−(k+1)/2), where B_k = Γ((k + 1)/2) / (√(πk)·Γ(k/2)).

The random variable t has mathematical expectation Mt = 0 and variance Dt = k/(k − 2) (k > 2).

Fisher distribution

If U and V are independent random variables distributed according to the χ^2 law with k1 and k2 degrees of freedom, then the quantity F = (U/k1) / (V/k2) has the Fisher distribution F with k1 and k2 degrees of freedom. The density of this distribution is f(x) = 0 for x ≤ 0 and

f(x) = C0·x^(k1/2 − 1) / (k2 + k1·x)^((k1 + k2)/2) for x > 0, where C0 = Γ((k1 + k2)/2)·k1^(k1/2)·k2^(k2/2) / (Γ(k1/2)·Γ(k2/2)).

The Fisher distribution F is determined by two parameters, the numbers of degrees of freedom k1 and k2.

Characteristic functions

0.1 A random variable Z = X + iY, where i is the imaginary unit (i^2 = −1) and X and Y are real random variables, is called a complex-valued random variable.

0.2 The mathematical expectation of a complex-valued random variable Z = X + iY is MZ = MX + i·MY. All properties of the mathematical expectation remain valid for complex-valued random variables.

0.3 Complex-valued random variables Z1 = X1 + iY1 and Z2 = X2 + iY2 are called independent if the pairs (X1, Y1) and (X2, Y2) are independent.

Laws of large numbers

Random functions

o A random function is a function X(t) whose value, for any value of the argument t, is a random variable.

In other words, a random function is a function that, as a result of the experiment, may take one specific form or another, although it is not known in advance which.

o The specific form taken by a random function as a result of the experiment is called a realization of the random function.

Because in practice the argument t is most often time, a random function is also called a random process.

The figure shows several implementations of a random process.

If we fix the value of the argument t, the random function X(t) turns into a random variable, called the cross-section of the random function corresponding to the moment t. We shall assume the distribution of a cross-section is continuous; then X(t) for a given t is determined by the distribution density p(x; t).

Obviously p(x; t) is not an exhaustive characteristic of the random function X(t), since it does not express the dependence between the cross-sections of X(t) at different moments t. A fuller description is given by the function p(x1, x2; t1, t2), the joint distribution density of the system of random variables (X(t1), X(t2)), where t1 and t2 are arbitrary values of the argument t. A still fuller characterization of X(t) would be given by the joint distribution density of a system of three random variables, and so on.

o A random process is said to have order n if it is completely determined by the joint distribution density of n arbitrary cross-sections of the process, i.e. of the system of n random variables (X(t1), …, X(tn)), where X(ti) is the cross-section corresponding to the moment ti, but is not determined by the joint distribution of fewer than n cross-sections.

o If the joint distribution density of any two cross-sections of a process completely determines it, the process is called a Markov process.

Let there be a random function X(t). The task arises of describing it by one or more non-random characteristics. As the first it is natural to take the function m_x(t) = M[X(t)], the mathematical expectation of the random process. As the second one takes the standard deviation σ_x(t) of the random process.

These characteristics are functions of t. The first is the average trajectory of all possible realizations. The second characterizes the possible spread of realizations around the average trajectory. But these characteristics are not enough: it is also important to know the dependence between the quantities X(t1) and X(t2). This dependence can be characterized by the correlation function (correlation moment).

Let there be two random processes, several realizations of which are shown in the figures.

These random processes have approximately the same mathematical expectations and mean square deviations, yet they are different processes. Any realization of the random function X1(t) changes its values slowly as t changes, which cannot be said of the random function X2(t). For the first process the dependence between the cross-sections X1(t) and X1(t + Δt) is stronger than the dependence between the cross-sections X2(t) and X2(t + Δt) of the second process; that is, the first dependence decreases more slowly than the second as Δt increases. In the second case the process "forgets" its past faster.

Let us dwell on the properties of the correlation function, which follow from the properties of the correlation moment of a pair of random variables.

Property 1 (symmetry): K_x(t1, t2) = K_x(t2, t1).

Property 2. If a non-random term φ(t) is added to a random function X(t), the correlation function does not change: K_{X+φ}(t1, t2) = K_X(t1, t2).

    Definition 1. A random experiment is a clearly described sequence of actions that can be repeated as many times as desired, but whose outcome cannot be predicted with certainty. The inability to guess the outcome of an experiment exactly is due to a large number of factors beyond our control. The set of all experimental outcomes is denoted by the letter Ω.

    Definition 2. A random event is any subset of the set of all possible outcomes of a random experiment.

    Example (random experiment):

    1. Look at the screen of a stock-exchange terminal to learn the latest quote of a liquid share, for example a share of RAO UES; the quote is the outcome of the experiment.
    2. Throw a die and look at the outcome of the experiment, the number of points rolled.

    Example (random event):

    1. Random event A: seeing, on the screen of the stock monitor, the quote of the RAO UES share within a given range.
    2. Random event B = {2, 3}: seeing one of these two numbers on the thrown die.

    The original numbering of the FCSM problem book provided by the Exchange School has been preserved. No significance should be attached to it; it is retained for the convenience of those preparing for the securities examination.

    1.4.1.11 In probability theory a random event is understood as a fact characterized by the following features:
    I It is observed once
    II It can be observed repeatedly
    III It is impossible to say with complete certainty whether it will happen again or not
    IV Under control of the experimental conditions, it can be stated with complete certainty whether it will happen or not

    A) Only I and IV are correct
    *B) Only II and III are correct
    C) Only II, III or IV is correct
    D) Only III is correct

    Solution. From Definitions 1 and 2 it is obvious that only II and III are true statements, i.e. the correct answer is B.

    Definition 3. The set of all outcomes of the experiment, a set of points of arbitrary nature, is called the certain event, because when a random experiment is conducted some outcome of the experiment is bound to occur.

    Definition 4. The impossible event is an event containing no outcome of the experiment, and which therefore cannot occur during the experiment.

    For educational purposes we depict the certain event as a circle.

    Definition 5. A random event A is then some subregion of it, and the complementary event to A (the negation of A) is the set "not A": all points that are not in A (i.e. A and "not A" do not intersect, but together make up everything).

    Definition 6. The "sum", or "union", or event "A or B", is the set that includes all points of both sets and only them.

    Definition 7. The "product", or "intersection", or event "A and B", is the set that includes only those points belonging to both set A and set B. If there are no such common points, that is, the product of events A and B is the impossible event, then events A and B are called incompatible.

    Comment. In particular, it is clear that the product of events A and "not A" is the impossible event, because these sets by definition have no common points.

    1.4.1.15.1 What is the product of a random event and the event complementary to it?

    A) The certain event
    *B) The impossible event
    C) The event itself

    Solution. From the comment to Definition 7 it follows that the correct answer is B.

    1.4.1.15.2 What is the sum of a random event and the event complementary to it?

    *A) The certain event
    B) The impossible event
    C) The complementary event

    Solution. From Definition 5 it follows that the correct answer is A.

    1.4.1.13.1 If event A is that the company's share price at tomorrow's trading will be no lower than 25 rubles, and event B is that the relative change in the share price will not exceed 3% of the previous day's price, what event equals the product of events A and B?

    A) The company's share price at tomorrow's trading will be no lower than 25 rubles OR the relative change in the share price will not exceed 3% of the previous day's price
    *B) The company's share price at tomorrow's trading will be no lower than 25 rubles AND the relative change in the share price will not exceed 3% of the previous day's price

    Solution. From Definition 7 it follows that saying "the product of events A and B" is the same as saying "event A and B", i.e. the correct answer is B.

    1.4.1.13.2 If event A is that the company's share price at tomorrow's trading will be no lower than 25 rubles, and event B is that the relative change in the share price will not exceed 3% of the previous day's price, what event equals the sum of events A and B?

    *A) The company's share price at tomorrow's trading will be no lower than 25 rubles OR the relative change in the share price will not exceed 3% of the previous day's price
    B) The company's share price at tomorrow's trading will be no lower than 25 rubles AND the relative change in the share price will not exceed 3% of the previous day's price

    Solution. From Definition 6 it follows that saying "the sum of events A and B" is the same as saying "event A or B", i.e. the correct answer is A.

    1.4.1.13.3 Random event A is that the company's share price at tomorrow's trading will be no lower than 25 rubles. From the following, indicate the random events complementary to random event A:
    I The company's share price at tomorrow's trading will be 26 rubles
    II The company's share price at tomorrow's trading will not exceed 26 rubles
    III The company's share price at tomorrow's trading will exceed 26 rubles

    A) Only I
    B) Only II
    C) Only I and III
    *D) None of the above

    Solution. It is often easier to write out the correct answer yourself and then see under which letter it is listed.

    In the symbols of school mathematics, our event A = (the share price at tomorrow's trading will be no lower than 25 rubles) = [25, +∞). The events I, II, III are, respectively, B = {26}, C = (−∞, 26], D = (26, +∞). It is clear that none of the events B, C, D coincides with the event "not A" = (−∞, 25).
