Fourier transform: the Fourier integral, the complex form of the integral, cosine and sine transforms, amplitude and phase spectra, properties and applications.

Properties of the Fourier transform

1. Linearity. The Fourier transform is a linear integral operation, i.e. the spectrum of a sum of signals is equal to the sum of the spectra of those signals.

Σ a_n s_n(t) ↔ Σ a_n S_n(ω)
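The linearity property is easy to check numerically; the sketch below uses NumPy's discrete FFT as a stand-in for the continuous transform (the array sizes and coefficients are arbitrary illustrative choices):

```python
import numpy as np

# Sketch: the DFT of a weighted sum of signals equals
# the same weighted sum of the individual DFTs.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = rng.standard_normal(64)
a, b = 2.0, -3.0

lhs = np.fft.fft(a * x + b * y)              # spectrum of the weighted sum
rhs = a * np.fft.fft(x) + b * np.fft.fft(y)  # weighted sum of the spectra

assert np.allclose(lhs, rhs)
```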

2. Parity properties

The parity properties are determined by the cosine (even, real) and sine (odd, imaginary) parts of the expansion and by the similarity of the direct and inverse transforms.


  • 3. Changing the argument of a function (compression or expansion of the signal) leads to the inverse change of the argument of its Fourier transform and an inversely proportional change of its modulus.
  • 4. Delay theorem. Delaying (shifting) the signal in time by an interval t0 changes the phase-frequency function of the spectrum (the phase angle of all harmonics) by -ωt0 without changing the modulus (the amplitude function) of the spectrum.

5. Derivative transformation (signal differentiation):

s′(t) = d/dt [(1/2π) ∫ Y(ω) exp(jωt) dω] = (1/2π) ∫ jω Y(ω) exp(jωt) dω ↔ jω Y(ω).

Thus, differentiation of a signal is represented in the spectral domain by simply multiplying the signal spectrum by the differentiation operator jω, which is equivalent to differentiating each harmonic of the spectrum. Multiplication by jω enriches the spectrum of the differentiated signal with high-frequency components (compared to the original signal) and suppresses the zero-frequency component.
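This multiplication rule can be verified numerically for a band-limited signal sampled over an exact period; the sketch below (an illustration with an arbitrarily chosen test signal) differentiates sin(3t) by multiplying its DFT by jω:

```python
import numpy as np

N = 256
t = np.arange(N) * (2 * np.pi / N)     # one full period, so the DFT is exact
s = np.sin(3 * t)                      # band-limited test signal

# Angular frequency of each DFT bin (sample spacing dt = 2*pi/N).
w = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)

# Differentiate by multiplying the spectrum by j*w.
ds = np.fft.ifft(1j * w * np.fft.fft(s)).real

assert np.allclose(ds, 3 * np.cos(3 * t), atol=1e-9)
```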


6. The transform of the integral of a signal with a known spectrum can be obtained from the following simple considerations. If s(t) = dy/dt ↔ jω Y(ω) = S(ω), then the inverse operation must also hold: y(t) = ∫ s(t) dt ↔ Y(ω) = S(ω)/(jω). This implies:

∫ s(t) dt ↔ (1/jω) S(ω).

The integration operator 1/(jω) in the frequency domain attenuates the high frequencies of the amplitude spectrum (for ω > 1) and amplifies the low ones (for ω < 1). The phase spectrum of the signal is shifted by -90° for positive frequencies and by +90° for negative ones.


7. Signal convolution transformation y(t) = s(t) * h(t):

Y(ω) = ∫ y(t) exp(-jωt) dt = ∫∫ s(τ) h(t-τ) exp(-jωt) dτ dt,

Y(ω) = ∫ s(τ) dτ ∫ h(t-τ) exp(-jωt) dt.

According to the delay theorem:

∫ h(t-τ) exp(-jωt) dt = H(ω) exp(-jωτ).

Y(ω) = H(ω) ∫ s(τ) exp(-jωτ) dτ = H(ω)·S(ω),

s(t) * h(t) ↔ S(ω)H(ω).


Thus, the convolution of functions in coordinate form is displayed in frequency representation by the product of the Fourier images of these functions.
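A discrete illustration of this statement (a sketch; note that the DFT implements circular convolution, so the sequences are zero-padded to obtain the ordinary linear convolution):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 0.5])
h = np.array([0.25, 0.5, 0.25])

# Zero-pad to the full linear-convolution length so the DFT's
# circular convolution coincides with the ordinary one.
n = len(x) + len(h) - 1
y = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real

assert np.allclose(y, np.convolve(x, h))
```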

8. Transformation of the product of signals y(t) = s(t) h(t):

Y(ω) = ∫ s(t) h(t) exp(-jωt) dt = ∫ s(t) [(1/2π) ∫ H(ω′) exp(jω′t) dω′] dt = (1/2π) ∫∫ s(t) H(ω′) exp(-j(ω-ω′)t) dω′ dt = (1/2π) ∫ H(ω′) dω′ ∫ s(t) exp(-j(ω-ω′)t) dt = (1/2π) ∫ H(ω′) S(ω-ω′) dω′ = (1/2π) H(ω) * S(ω).

The product of functions in coordinate form is displayed in frequency representation by convolution of the Fourier images of these functions.

9. Multiplying the signal by the harmonic function fills the signal with the harmonic frequency and generates a radio pulse.


10. Power spectra. If the function s(t) has a Fourier transform S(ω), then the power spectral density of this function is determined by the expressions:

w(t) = s(t) s*(t) = |s(t)|²,    W(ω) = S(ω) S*(ω) = |S(ω)|².

The power spectrum is a real non-negative even function, often called the energy spectrum. As the squared modulus of the signal spectrum, the power spectrum contains no phase information about the frequency components, and therefore reconstruction of the signal from the power spectrum is impossible. This also means that signals with different phase characteristics can have the same power spectrum. In particular, a time shift of the signal does not affect its power spectrum.
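The insensitivity of the power spectrum to a time shift can be illustrated with a short NumPy sketch (the shift of 17 samples is an arbitrary choice; np.roll performs a circular shift):

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(128)
shifted = np.roll(s, 17)                # circular time shift by 17 samples

W  = np.abs(np.fft.fft(s)) ** 2         # power spectrum of the original
Ws = np.abs(np.fft.fft(shifted)) ** 2   # power spectrum of the shifted copy

# The shift changes only the phase spectrum; the power spectrum is unchanged.
assert np.allclose(W, Ws)
```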

11. Parseval's equality. Total signal spectrum energy:

E_s = ∫ W(f) df = ∫ |S(f)|² df.

Since the coordinate and frequency representations are essentially just different mathematical representations of the same signal, the energy of the signal in the two representations must also be equal, which implies Parseval’s equality:

∫ |s(t)|² dt = ∫ |S(f)|² df,

i.e. the energy of a signal is equal to the integral of the squared modulus of its frequency spectrum, the sum of the energies of all frequency components of the signal.
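Parseval's equality holds for the DFT as well, up to the 1/N factor coming from NumPy's unnormalized transform; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.standard_normal(100)

energy_time = np.sum(np.abs(s) ** 2)
# np.fft.fft is unnormalized, so Parseval's equality picks up a 1/N factor.
energy_freq = np.sum(np.abs(np.fft.fft(s)) ** 2) / len(s)

assert np.allclose(energy_time, energy_freq)
```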

Having learned how to calculate the spectral densities of fairly simple but frequently encountered pulse signals, let us move on to a systematic study of the properties of the Fourier transform.

Linearity of the Fourier transform.

This most important property is formulated as follows: if there is a certain set of signals, then the weighted sum of the signals is Fourier transformed as follows:

Here the weighting coefficients are arbitrary numbers.

To prove formula (2.26), one should substitute the sum of the signals into the Fourier transform (2.16).

Properties of the real and imaginary parts of the spectral density.

Let the signal s(t) take real values. Its spectral density in the general case is complex:

We substitute this expression into the inverse Fourier transform formula (2.18):

In order for the signal obtained by such a double transformation to remain real, it is necessary to require that

This is possible only if the real part of the signal spectral density is even, and the imaginary part is an odd function of frequency:

Spectral density of a time-shifted signal.

Let us assume that the correspondence between the signal and its spectral density is known. Consider the same signal, but delayed by some interval. Taking the delayed starting point as the new origin of time, we denote the displaced signal accordingly. Let us show that

The proof is very simple. Indeed,

The modulus of the complex exponential factor is equal to one for any frequency, therefore the amplitudes of the elementary harmonic components that make up the signal do not depend on its position on the time axis. Information about this position is contained in the frequency dependence of the argument of its spectral density (the phase spectrum).

Dependence of the spectral density of the signal on the choice of time measurement scale.

Let us assume that the original signal is subjected to a change of time scale. This means that the role of time t is played by the new independent variable kt (k is some real number). If k > 1, the original signal is “compressed”; if k < 1, the signal is “stretched” in time.

It turns out that the spectra of the original and the rescaled signals are then related as follows:

Indeed,

whence formula (2.29) follows.

So, in order, for example, to compress a signal in time while maintaining its shape, it is necessary to distribute the same spectral components over a wider frequency range with a corresponding proportional decrease in their amplitudes.

The following problem is closely related to the issue considered here.

Given a pulse that is different from zero on a segment and characterized by a known spectral density, it is required to find the spectral density of the “time-reversed” signal, which is a “mirror copy” of the original pulse oscillation. It is clear that

After performing a change of variable, we find that

Spectral density of the derivative and indefinite integral.

Let the signal s(t) and its spectral density be given. We will study the new signal (the derivative of s(t)) and set out to find its spectral density.

By definition,

The Fourier transform is a linear operation, which means equality (2.31) is also true with respect to spectral densities. Taking (2.28) into account, we obtain

Representing the exponential function by its Taylor series, substituting this series into (2.32), and keeping only the first two terms, we find

With differentiation, the rate of change of the signal in time increases. As a consequence, the modulus of the spectrum of the derivative has larger values in the high-frequency region than the modulus of the spectrum of the original signal.

Formula (2.33) is generalized to the spectrum of the derivative of order n. It is easy to prove that in this case

So, differentiating a signal with respect to time is equivalent to the simple algebraic operation of multiplying its spectral density by the factor iw. Therefore, it is customary to say that the imaginary factor iw is a differentiation operator acting in the frequency domain.

The considered function is an antiderivative (indefinite integral) with respect to the function. From (2.33) it formally follows that the spectrum of the antiderivative

Thus, the multiplier 1/(iw) serves as an integration operator in the frequency domain.

Spectral density of the signal at the integrator output.

In many radio engineering devices, so-called integrators are used - physical systems whose output signal is proportional to the integral of the input action. Let us specifically consider an integrator that converts the input signal into an output signal according to the following law:

Here the length of the integration interval is a fixed parameter.

The definite integral included in (2.36) is obviously equal to the difference between two values of the antiderivative of the signal, one of which is evaluated at the argument t and the other at an earlier argument. Using relations (2.28) and (2.35), we obtain the formula relating the spectral densities of the signals at the input and output:

The factor in brackets is limited at any frequency, while the magnitude of the denominator increases linearly with increasing frequency. This indicates that the integrator in question acts like a low-pass filter, attenuating the high-frequency spectral components of the input signal.

As follows from the theory of the Fourier series, it is applicable when dealing with periodic functions and with functions with a limited interval of variation of independent variables (since this interval can be extended to the entire axis by periodically extending the function). However, periodic functions are relatively rare in practice. This situation requires the creation of a more general mathematical apparatus for handling non-periodic functions, namely the Fourier integral and, on its basis, the Fourier transform.

Let us consider the non-periodic function f(t) as the limit of a periodic one with period T = 2l as l → ∞.

A periodic function with a period of 2l can be represented as a Fourier series expansion (we will use its complex form)

where the expressions for the coefficients have the form:

Let us introduce the following notation for frequencies:

Let us write the expansion in the Fourier series in the form of one formula, substituting in (1) the expression for the coefficients (2) and for the frequency (3):

Discrete spectrum of a periodic function with period 2l

Let us denote the minimum distance between the points of the spectrum, equal to the fundamental frequency of the oscillations, i.e.

and introduce this notation in (4):

In this notation, the Fourier series resembles the integral sum for a function.

Passing to the limit T = 2l → ∞ for a non-periodic function, we find that the frequency interval becomes infinitesimal (we denote it as dw), and the spectrum becomes continuous. From a mathematical point of view, this corresponds to replacing summation over a discrete set by integration over the corresponding variable with infinite limits.

This expression is the Fourier integral formula.

2.2 Fourier transform formulas.

It is convenient to represent the Fourier integral as a superposition of two formulas:

The function F(w), associated with the function f(t) by the first formula, is called its Fourier transform (Fourier image). In turn, the second formula, which recovers the original function from its image, is called the inverse Fourier transform. Note the symmetry of the formulas for the direct and inverse Fourier transforms, up to the constant factor 1/2p and the sign in the exponent.

Symbolically, the direct and inverse Fourier transforms will be denoted as f(t)~F(w).

Drawing an analogy with the trigonometric Fourier series, we conclude that the Fourier image (6) is an analogue of the Fourier coefficient (see (2)), and the inverse Fourier transform (7) is an analogue of the expansion of a function into a trigonometric Fourier series (see (1)).

Note that the multiplier 1/2p, instead of appearing in the inverse transform, can be attributed to the direct Fourier transform, or split symmetrically between the direct and inverse transforms. The main thing is that the two transforms together form the Fourier integral formula (5), i.e. the product of the constant factors of the direct and inverse transforms must equal 1/2p.

Note that for applied purposes it is often more convenient to use not the angular frequency w but the frequency n related to it by w = 2pn and measured in hertz (Hz). In terms of this frequency, the Fourier transform formulas take the form:

Let us formulate without proof sufficient conditions for the existence of the Fourier transform.

  • 1) f(t) is bounded on t ∈ (-∞, ∞);
  • 2) f(t) is absolutely integrable on t ∈ (-∞, ∞);
  • 3) the number of points of discontinuity, maxima, and minima of f(t) is finite.

Another sufficient condition is the requirement that the function be square-integrable on the real axis, which physically corresponds to the requirement of finite signal energy.

Thus, using the Fourier transform, we have two ways to represent the signal: time f(t) and frequency F(w).

  • 2.3 Properties of the Fourier transform.
  • 1. Linearity.

If f(t)~F(w),g(t)~G(w),

then af(t)+bg(t) ~ aF(w)+bG(w).

The proof is based on the linear properties of integrals.

  • 2. Parity.
  • 2.1 If f(t) is a real even function and f(t)~F(w), then F(w) is also a real even function.

Proof:

Using definition (6), as well as Euler’s formula, we obtain

  • The result is a real, even function of w.
  • 2.2 If f(t) is an odd real function, then F(w) is an odd imaginary function.

2.3 If f(t) is an arbitrary real function, F(w) has an even real part and an odd imaginary part.

Proof:


The properties of parity 2 can be summarized in the formula:

3. Similarity

If f(t)~F(w), then f(at) ~ (1/|a|)F(w/a).

  • 4. Bias.
  • 4.1 If f(t)~F(w), then f(t-a) ~ e^(-iwa)F(w).

I.e., a time delay corresponds to multiplication by a complex exponential in the frequency domain.

4.2 If f(t)~F(w), then e^(iat)f(t) ~ F(w-a).

I.e., a frequency shift corresponds to multiplication by a complex exponential in the time domain.

  • 5. If f(t)~F(w), then
  • 5.1 f’(t)~iwF(w), and more generally f^(n)(t)~(iw)^n F(w),

if f(t) has n continuous derivatives.

Proof:

5.2 (-it)^n f(t) ~ F^(n)(w), if F(w) has n continuous derivatives.

Proof:

  • 2.4 The most important examples of finding the Fourier transform.

where the transformed function is the rectangular impulse

Here we took into account the value of the Poisson integral: ∫ e^(-x²) dx = √π (over the whole real axis).

The last integral can be found as follows. The integration contour C is a straight line in the complex plane parallel to the real axis (w is a constant). The integral of an analytic function over a closed contour is zero (Cauchy's theorem). We form a closed contour consisting of the straight line C and the real axis t, closed at infinity. Because the integrand tends to zero at infinity, the integrals along the closing arcs vanish. This means that the integral along the straight line C is equal to the integral taken along the real axis in the positive direction.

2.5 The uncertainty principle for the time-frequency representation of a signal.

Using the example of a rectangular pulse, we will show the validity of the uncertainty principle, which states that it is impossible to simultaneously localize a pulse in time and sharpen its frequency selectivity.

According to 5), the width of the rectangular pulse in the time domain is DT = 2T. We take the distance between the adjacent zeros of the central lobe in the frequency domain as the width of the Fourier image of the rectangular pulse; the first zeros of the function sin(wT)/(wT) lie at w = ±π/T.

Thus we get

Thus, the more a pulse is localized in time, the more its spectrum is smeared. Conversely, to reduce the spectrum, we are forced to stretch the pulse in time. This principle is valid for any form of impulse and is universal.
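The inverse relationship between pulse duration and spectral width can also be observed numerically. In the sketch below (the sizes are arbitrary illustrative choices), halving the duration of a discrete rectangular pulse doubles the distance to the first zero of its spectrum:

```python
import numpy as np

def first_spectral_zero(pulse_len, N=1024):
    """Bin index of the first zero in the spectrum of a rectangular pulse."""
    x = np.zeros(N)
    x[:pulse_len] = 1.0
    mag = np.abs(np.fft.fft(x))
    # The Dirichlet-kernel spectrum has its first zero at bin N / pulse_len.
    return int(np.argmax(mag < 1e-6))

# Halving the pulse duration doubles the width of the spectral main lobe.
assert first_spectral_zero(16) == 64
assert first_spectral_zero(8) == 2 * first_spectral_zero(16)
```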

2.6 Convolution and its properties.

Convolution is the main procedure when filtering a signal.

Let us call a function the convolution of the non-periodic functions f(t) and g(t) if it is defined as the following integral:

We will symbolically denote this fact as f(t) * g(t).

The convolution operation has the following properties.

  • 1. Commutativity.

The proof of commutativity can be obtained by the change of variable t - τ = τ′.

  • 2. Associativity

Proof:

  • 3. Distributivity

The proof of this property follows directly from the linear properties of integrals.

For signal processing, the most important results of the Fourier method (after the transform formulas themselves) are the convolution theorems. We will use the frequency n instead of w, because the convolution theorems in this representation are mutually symmetric.

2.7 Convolution theorems

First convolution theorem.

The Fourier transform of a direct product of functions is equal to the convolution of the transformations

Proof:

Using the definition of the inverse Fourier transform and changing the order of integration, we obtain:

In terms of the angular frequency w, this theorem has a less universal form, since an extra constant factor appears.

Second convolution theorem.

The Fourier transform of the convolution of functions is equal to the direct product of the transformations.

Proof:


For example, consider the convolution of a rectangular pulse with itself.

By assumption, f(τ) = 0 for τ < -T and for τ > T. Similarly, f(t-τ) = 0 for t-τ < -T and for t-τ > T, i.e. for τ > t + T and for τ < t - T.

For -2T < t < 0 the integrand is nonzero for -T < τ < t + T, and for 0 < t < 2T it is nonzero for t - T < τ < T.

Combining both cases, we get the expression for the convolution:

Thus, the convolution of a rectangular pulse with itself will be a triangular pulse (sometimes this function is called the L-function).
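The discrete analogue is immediate; convolving a rectangular sequence with itself in NumPy yields the triangular sequence:

```python
import numpy as np

rect = np.ones(5)                # discrete rectangular pulse
tri = np.convolve(rect, rect)    # self-convolution

# The result ramps up and back down: a triangular pulse.
assert np.array_equal(tri, np.array([1., 2., 3., 4., 5., 4., 3., 2., 1.]))
```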

Using the convolution theorem, we can easily obtain the Fourier transform of the L-function

In practice, physical situations correspond to functions equal to zero for t < 0. As a result, the infinite limits of integration are replaced by finite ones.

Find the convolution of the functions f(t) and g(t)

since f(τ) = 0 for τ < 0 and g(t-τ) = 0 for t-τ < 0, i.e. for τ > t.

Let us introduce the concept of mutual correlation of two functions f(t) and g(t).

where τ is a time shift that varies continuously over the interval (-∞, ∞).

An important concept is the correlation of a function with itself, which is called autocorrelation.

  • 2.8 Signal power and energy.

Let's move on to consider the concept of signal power and energy. The importance of these concepts is explained by the fact that any transfer of information is actually a transfer of energy.

Consider an arbitrary complex signal f(t).

The instantaneous signal power p(t) is determined by the equality

The total energy is equal to the integral of the instantaneous power over the entire period of signal existence:

Signal power can also be considered as a function of frequency; in this case one speaks of the instantaneous power per unit frequency, with a corresponding notation.

The total signal energy is calculated by the formula

The total signal energy should not depend on the selected representation. The total energy values ​​calculated from the time and frequency representations must match. Therefore, equating the right sides, we obtain the equality:

This equality constitutes the content of Parseval's theorem for non-periodic signals. A rigorous proof of this theorem will be given when studying the topic “Generalized Functions”.

Similarly, expressing the interaction energy of two different signals f(t) and g(t) in time and frequency representation, we obtain:

Let us find out the mathematical meaning of Parseval's theorem.

From a mathematical point of view, this integral is the scalar product of the functions f(t) and g(t), denoted (f, g). The square root of the scalar product of a function with itself is called the norm of f(t) and is denoted ||f||. Therefore, it follows from Parseval's theorem that the scalar product and the norm are invariant under the Fourier transform, i.e.

The instantaneous signal power considered as a function of frequency has another generally accepted name: the power spectrum. The power spectrum is the main mathematical tool of spectral analysis, allowing one to determine the frequency composition of a signal. In addition to the power spectrum, in practice the amplitude and phase spectra are used, defined respectively as the modulus and the argument of the spectral density:
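A minimal sketch of computing the amplitude and phase spectra of a sampled cosine (the frequency bin and phase value are arbitrary illustrative choices):

```python
import numpy as np

N = 64
t = np.arange(N) / N
s = np.cos(2 * np.pi * 4 * t + np.pi / 3)   # 4 cycles with phase pi/3

S = np.fft.fft(s)
amplitude = np.abs(S)     # amplitude spectrum (modulus)
phase = np.angle(S)       # phase spectrum (argument)

assert np.argmax(amplitude[:N // 2]) == 4   # spectral peak at bin 4
assert np.isclose(phase[4], np.pi / 3)      # phase of that component
```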

  • 2.9 Wiener-Khinchin theorem.

The power spectral density of the signal f(t) is equal to the Fourier transform of its autocorrelation function.

The cross-spectral density of the signals f(t) and g(t) is equal to the Fourier transform of their cross-correlation function.

Both statements can be combined into one: Spectral density is equal to the Fourier transform of the correlation function.

The proof will be given later after introducing the concept of a generalized function.
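Even without the proof, the theorem can be checked numerically in the circular (DFT) setting, where the FFT of the circular autocorrelation equals the squared modulus of the signal's FFT:

```python
import numpy as np

rng = np.random.default_rng(3)
s = rng.standard_normal(64)

# Circular autocorrelation r[k] = sum_n s[n] * s[(n + k) mod N].
r = np.array([np.sum(s * np.roll(s, -k)) for k in range(len(s))])

# Wiener-Khinchin (discrete form): the FFT of the autocorrelation
# equals the power spectrum |S|^2.
assert np.allclose(np.fft.fft(r), np.abs(np.fft.fft(s)) ** 2)
```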

I believe that everyone is generally aware of the existence of such a wonderful mathematical tool as the Fourier transform. However, for some reason it is taught so poorly in universities that relatively few people understand how this transformation works and how it should be used correctly. Meanwhile, the mathematics of this transformation is surprisingly beautiful, simple and elegant. I invite everyone to learn a little more about the Fourier transform and the related topic of how analog signals can be effectively converted into digital signals for computational processing.

Without using complex formulas and Matlab, I will try to answer the following questions:

  • FT, DFT, DTFT - what are the differences and how do seemingly completely different formulas give such conceptually similar results?
  • How to correctly interpret Fast Fourier Transform (FFT) results
  • What to do if you are given a signal of 179 samples and the FFT requires an input sequence of length equal to a power of two
  • Why, when trying to obtain the spectrum of a sinusoid using Fourier, instead of the expected single “stick”, a strange squiggle appears on the graph, and what can be done about it
  • Why analog filters are placed before the ADC and after the DAC
  • Whether an ADC can digitize a signal with a frequency higher than half the sampling frequency (the textbook answer "no" is incomplete; the correct answer is that it is possible)
  • How to restore the original signal from a digital sequence

I will proceed from the assumption that the reader understands what an integral is, a complex number (as well as its modulus and argument), convolution of functions, plus at least a “hands-on” idea of what the Dirac delta function is. If you don’t know, no problem, read the above links. Throughout this text, by “product of functions” I will mean “pointwise multiplication”.

We should probably start with the fact that the usual Fourier transform is, as you can guess from the name, something that transforms one function into another, that is, it associates each function of a real variable x(t) with its spectrum, or Fourier image, Y(w):

If we give analogies, then an example of a transformation similar in meaning can be, for example, differentiation, turning a function into its derivative. That is, the Fourier transform is essentially the same operation as taking the derivative, and it is often denoted in a similar way by drawing a triangular “cap” over the function. Only in contrast to differentiation, which can also be defined for real numbers, the Fourier transform always “works” with more general complex numbers. Because of this, problems constantly arise with displaying the results of this transformation, since complex numbers are determined not by one, but by two coordinates on a graph operating with real numbers. The most convenient way, as a rule, is to represent complex numbers in the form of a modulus and an argument and draw them separately as two separate graphs:

The graph of the argument of the complex value is often called the “phase spectrum”, and the graph of the modulus the “amplitude spectrum”. The amplitude spectrum is usually of much greater interest, and therefore the “phase” part of the spectrum is often skipped. In this article we will also focus on the “amplitude” side of things, but we should not forget about the existence of the missing phase part of the graph. In addition, instead of the usual modulus of the complex value, its decimal logarithm multiplied by 10 is often drawn. The result is a logarithmic graph, whose values are expressed in decibels (dB).

Please note that moderately negative numbers on the logarithmic graph (-20 dB or less) correspond to nearly zero values on the “normal” graph. Therefore, the long and wide “tails” of various spectra on such graphs practically disappear when displayed in “ordinary” coordinates. The convenience of this seemingly strange representation arises from the fact that the Fourier images of various functions often need to be multiplied with one another. In such pointwise multiplication of complex-valued Fourier images, their phase spectra add, and their amplitude spectra multiply. The first is easy to do, while the second is relatively difficult. However, the logarithms of the amplitudes add when the amplitudes are multiplied, so logarithmic amplitude graphs can, like phase graphs, simply be added pointwise. In addition, in practical problems it is often more convenient to operate not with the “amplitude” of the signal but with its “power” (the square of the amplitude). On a logarithmic scale, both graphs (amplitude and power) look identical and differ only by a coefficient: all values on the power graph are exactly twice as large as on the amplitude scale. Accordingly, to plot the distribution of power over frequency (in decibels), you need not square anything: just compute the decimal logarithm and multiply it by 20.

Are you bored? Just wait a little longer; we'll soon be done with the boring part of the article explaining how to interpret the graphs :). But before that, there is one extremely important thing to understand: although all of the above spectrum graphs were drawn for some limited ranges of values (positive numbers in particular), all of these graphs actually continue to plus and minus infinity. The graphs simply depict the “most meaningful” part, which is usually mirrored for negative values of the parameter and is often repeated periodically with a certain step when viewed on a larger scale.

Having decided what is drawn on the graphs, let's return to the Fourier transform itself and its properties. There are several different ways to define this transformation, differing in small details (different normalizations). For example, in our universities, for some reason, they often use the normalization of the Fourier transform, which defines the spectrum in terms of angular frequency (radians per second). I will use a more convenient Western formulation that defines the spectrum in terms of ordinary frequency (hertz). The direct and inverse Fourier transforms in this case are determined by the formulas on the left, and some properties of this transformation that we will need are determined by a list of seven points on the right:

The first of these properties is linearity. If we take some linear combination of functions, then the Fourier transform of this combination will be the same linear combination of the Fourier images of these functions. This property allows complex functions and their Fourier images to be reduced to simpler ones. For example, the Fourier transform of a sinusoidal function with frequency f and amplitude a is a combination of two delta functions located at points f and -f and with coefficient a/2:

If we take a function consisting of the sum of a set of sinusoids with different frequencies, then according to the property of linearity, the Fourier transform of this function will consist of a corresponding set of delta functions. This allows us to give a naive but visual interpretation of the spectrum according to the principle “if in the spectrum of a function frequency f corresponds to amplitude a, then the original function can be represented as a sum of sinusoids, one of which will be a sinusoid with frequency f and amplitude 2a.” Strictly speaking, this interpretation is incorrect, since the delta function and the point on the graph are completely different things, but as we will see later, for discrete Fourier transforms it will not be so far from the truth.
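For a sinusoid with an integer number of cycles in the analysis window, the discrete picture matches this description exactly: the normalized DFT shows two peaks of height a/2 at ±f. A sketch (N, f, and a are arbitrary illustrative choices):

```python
import numpy as np

N, f, a = 128, 10, 3.0
t = np.arange(N) / N
s = a * np.sin(2 * np.pi * f * t)

S = np.fft.fft(s) / N   # normalized DFT

# Two peaks of height a/2: at bin f and at bin N - f (i.e. frequency -f).
assert np.isclose(np.abs(S[f]), a / 2)
assert np.isclose(np.abs(S[N - f]), a / 2)
```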

The second property of the Fourier transform is the independence of the amplitude spectrum from the time shift of the signal. If we move a function to the left or right along the x-axis, then only its phase spectrum will change.

The third property is that stretching (compressing) the original function along the time axis (x) proportionally compresses (stretches) its Fourier image along the frequency scale (w). In particular, the spectrum of a signal of finite duration is always infinitely wide and, conversely, the spectrum of finite width always corresponds to a signal of unlimited duration.

The fourth and fifth properties are perhaps the most useful of all. They make it possible to reduce the convolution of functions to a pointwise multiplication of their Fourier images, and vice versa - the pointwise multiplication of functions to the convolution of their Fourier images. A little further I will show how convenient this is.

The sixth property speaks of the symmetry of Fourier images. In particular, from this property it follows that in the Fourier transform of a real-valued function (i.e., any “real” signal), the amplitude spectrum is always an even function, and the phase spectrum (if brought to the range -pi...pi) is an odd one. It is for this reason that the negative part of the spectrum is almost never drawn on spectrum graphs: for real-valued signals it provides no new information (but, I repeat, it is not zero either).

Finally, the last, seventh property, says that the Fourier transform preserves the “energy” of the signal. It is meaningful only for signals of finite duration, the energy of which is finite, and suggests that the spectrum of such signals at infinity quickly approaches zero. It is precisely because of this property that spectrum graphs usually depict only the “main” part of the signal, which carries the lion’s share of the energy - the rest of the graph simply tends to zero (but, again, is not zero).

Armed with these 7 properties, let's look at the mathematics of signal “digitization”, which allows you to convert a continuous signal into a sequence of numbers. To do this, we need to take a function known as the “Dirac comb”:

A Dirac comb is simply a periodic sequence of delta functions with unit coefficient, starting at zero and proceeding with step T. To digitize signals, T is chosen as small as possible, T << 1. The Fourier image of this function is also a Dirac comb, only with a much larger step 1/T and a somewhat smaller coefficient (1/T). From a mathematical point of view, sampling a signal in time is simply a pointwise multiplication of the original signal by a Dirac comb. The value 1/T is called the sampling frequency:

Instead of a continuous function, after such multiplication, a sequence of delta pulses of a certain height is obtained. Moreover, according to property 5 of the Fourier transform, the spectrum of the resulting discrete signal is a convolution of the original spectrum with the corresponding Dirac comb. It is easy to understand that, based on the properties of convolution, the spectrum of the original signal is “copied” an infinite number of times along the frequency axis with a step of 1/T, and then summed.

Note that if the original spectrum had a finite width and we used a sufficiently high sampling frequency, then the copies of the original spectrum will not overlap, and therefore will not sum with each other. It is easy to understand that from such a “collapsed” spectrum it will be easy to restore the original one: it is enough to simply take the spectrum component in the region of zero, “cutting off” the extra copies going to infinity. The simplest way to do this is to multiply the spectrum by a rectangular function equal to T in the range -1/2T...1/2T and zero outside this range. In the time domain, such a Fourier image corresponds to the function sinc(t/T), and according to property 4 this multiplication is equivalent to the convolution of the original sequence of delta functions with the function sinc(t/T)
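This reconstruction can be sketched directly as Whittaker-Shannon interpolation: a sum of sincs centered at the sample points. The sum below is truncated to finitely many samples, so it only approximates the ideal reconstruction (all parameter values are illustrative):

```python
import numpy as np

T = 0.1                                   # sampling period (fs = 10 Hz)
n = np.arange(-50, 51)                    # finite set of sample indices
f0 = 2.0                                  # 2 Hz tone, well below fs/2 = 5 Hz
samples = np.sin(2 * np.pi * f0 * n * T)

def reconstruct(t):
    """Truncated Whittaker-Shannon interpolation: sum of shifted sincs."""
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

# Between the sample points the truncated sum closely tracks the original.
for t in (0.025, 0.13, 0.31):
    assert abs(reconstruct(t) - np.sin(2 * np.pi * f0 * t)) < 0.05
```

Note that np.sinc is the normalized sinc, sin(pi*x)/(pi*x), which is exactly the kernel needed here.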



That is, using the Fourier transform, we have a way to easily reconstruct the original signal from a time-sampled one, provided that the sampling frequency is at least twice (due to the presence of negative frequencies in the spectrum) the maximum frequency present in the original signal. This widely known result is called the Kotelnikov theorem (or the Nyquist-Shannon sampling theorem). However, as is now easy to see from the proof, this result, contrary to a widespread misconception, gives a sufficient but not a necessary condition for restoring the original signal. All we actually need is that the part of the spectrum we care about does not overlap with its copies after sampling, and if the signal is sufficiently narrowband (the nonzero part of its spectrum is narrow), this can often be achieved at a sampling frequency much lower than twice the maximum frequency of the signal. This technique is called undersampling (subsampling, bandpass sampling) and is quite widely used in processing all kinds of radio signals. For example, to digitize an FM radio band occupying 88 to 108 MHz, an ADC running at only 43.5 MHz can be used instead of the 216 MHz suggested by the Kotelnikov theorem. In this case, however, you will need a high-quality ADC and a good filter.
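The arithmetic behind this example is easy to check. The helper below is just an illustration (the band edges and 43.5 MHz rate come from the text): it computes where a real tone appears after sampling, and confirms that the whole 88-108 MHz band folds down to 1-21 MHz, which fits under the 21.75 MHz Nyquist limit without the copies overlapping.

```python
# Undersampling arithmetic for the FM band example from the text.
fs = 43.5e6

def alias(f, fs):
    """Apparent frequency of a real tone at f after sampling at rate fs."""
    f = f % fs
    return min(f, fs - f)

lo = alias(88e6, fs)      # band edge 88 MHz lands at 1 MHz
hi = alias(108e6, fs)     # band edge 108 MHz lands at 21 MHz
print(lo / 1e6, hi / 1e6)
print(hi < fs / 2)        # the 20 MHz-wide band fits below Nyquist
```

The band [88, 108] MHz also lies entirely inside one Nyquist zone ([2fs, 2.5fs] = [87, 108.75] MHz), so it folds down without spectral inversion.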

Let me note that the folding of high frequencies onto lower ones (aliasing) is an inherent property of signal sampling that irreversibly "spoils" the result. Therefore, if the signal can in principle contain high frequencies (that is, almost always), an analog filter is placed in front of the ADC to "cut off" everything unnecessary directly in the original signal, since after sampling it will be too late. These filters, being analog devices, are not ideal, so some "damage" to the signal still occurs; in practice this means that the highest frequencies in the spectrum are, as a rule, unreliable. To reduce this problem, the signal is often oversampled: the input analog filter is set to a lower bandwidth, and only the lower part of the theoretically available frequency range of the ADC is used.
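Why aliasing is irreversible is worth seeing in numbers: once sampled, a tone above Nyquist produces exactly the same sample values as its low-frequency alias, so no later processing can distinguish them. A tiny NumPy sketch (frequencies are illustrative):

```python
import numpy as np

fs = 32.0
n = np.arange(64)
t = n / fs

high = np.cos(2 * np.pi * 35.0 * t)   # 35 Hz tone, above Nyquist (16 Hz)
low = np.cos(2 * np.pi * 3.0 * t)     # its alias at 35 - 32 = 3 Hz

# The two sample sequences are identical: after the ADC, no algorithm can
# tell which tone was at the input -- hence the analog filter must come first.
print(np.max(np.abs(high - low)))
```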

Another common misconception, by the way, is depicting the signal at the DAC output as "steps". The "steps" correspond to convolving the sampled signal sequence with a rectangular function of width T and height 1:

With this transformation the signal spectrum is multiplied by the Fourier transform of that rectangular function, which is again a sinc, "stretched" the more, the narrower the corresponding rectangle. The spectrum of the sampled signal is thus multiplied point by point by this sinc: the unwanted high frequencies in the "extra copies" of the spectrum are not completely cut off, while the upper part of the "useful" part of the spectrum is, on the contrary, attenuated.
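The attenuation of the useful band by a "stepped" (zero-order-hold) output is easy to quantify. A sketch, assuming NumPy's normalized sinc convention and an illustrative sample rate: the gain of a width-T rectangle is |sinc(fT)|, which is 1 at DC but drops to 2/pi (about -3.9 dB) at the Nyquist frequency.

```python
import numpy as np

# Zero-order-hold ("steps") frequency response: the Fourier transform of a
# width-T rectangle has magnitude |sin(pi*f*T)/(pi*f*T)| = |np.sinc(f*T)|.
fs = 48000.0          # illustrative sample rate
T = 1.0 / fs

def zoh_gain(f):
    return abs(np.sinc(f * T))

print(zoh_gain(0.0))                       # 1.0: DC passes untouched
print(zoh_gain(fs / 2))                    # 2/pi ~ 0.637 at Nyquist
print(20 * np.log10(zoh_gain(fs / 2)))     # about -3.9 dB of droop
```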

In practice, of course, no one does this. There are many different approaches to constructing a DAC, but even in DACs of the weighted-summation type the rectangular pulses are, on the contrary, chosen as short as possible (approximating a true sequence of delta functions) in order to avoid excessive suppression of the useful part of the spectrum. The "extra" frequencies in the resulting wideband signal are almost always removed by passing the signal through an analog low-pass filter, so there are no "digital steps" either "inside" the converter or, especially, at its output.

However, let's get back to the Fourier transform. The Fourier transform described above, applied to a pre-sampled signal sequence, is called the Discrete-Time Fourier Transform (DTFT). The spectrum obtained by such a transformation is always 1/T-periodic, so the DTFT spectrum is completely determined by its values on a segment of length 1/T, for example [0, 1/T).
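The 1/T-periodicity of the DTFT can be verified directly from its defining sum: shifting the frequency by 1/T multiplies each term by exp(-2*pi*i*n) = 1. A minimal sketch (the sequence and frequencies are arbitrary illustrations):

```python
import numpy as np

T = 1.0 / 32.0
rng = np.random.default_rng(0)
x = rng.standard_normal(50)     # an arbitrary sampled sequence

def dtft(x, f, T):
    """DTFT of the sample sequence x, evaluated at frequency f (Hz)."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * f * n * T))

f = 5.37                        # any frequency
print(abs(dtft(x, f, T) - dtft(x, f + 1.0 / T, T)))   # ~0: 1/T-periodic
```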
