I have a question about the FFT of an audio signal. I want to draw a chart based on the audio signal, but I am not strong in this topic, so I would like advice from experienced people on where to start.

I have a few questions:

1) What are samples? For example, when using the SFML multimedia library, the samples are obtained with the following construct:

```
sf::SoundBuffer buffer;
buffer.loadFromFile("sound.wav");
const sf::Int16* input = buffer.getSamples();
```

So, as I understand it, the samples here are a binary representation of the sound file? Do I understand correctly that the code above is the same as:

```
#include <cstdio>

typedef short int16;

int16* load()
{
    FILE* fp;
    if ((fp = fopen("sound.wav", "rb")) == NULL) {
        printf("Error opening the file.\n");
        return NULL;
    }
    fseek(fp, 0, SEEK_END);
    long n = ftell(fp) / sizeof(int16); // file size in bytes -> number of 16-bit values
    fseek(fp, 0, SEEK_SET);
    int16* a = new int16[n];
    for (long i = 0; i < n; i++) {
        if (fread(&a[i], sizeof(a[i]), 1, fp) != 1) {
            if (feof(fp)) break;
            printf("Error reading the file.\n");
        }
    }
    fclose(fp);
    return a;
}

int16* input = load();
```

That is, getting the samples is the same as getting the binary contents of the file as 16-bit values? Or do I misunderstand?

2) I know there is the fast FFTW library for the FFT. There is also an implementation of the FFT algorithm in C++ on Wikibooks (link: https://ru.wikiBooks.org/wiki/Dealization_Algorithm/Strete_Preservation_FURE#C.2B.2B ). The question is: what exactly is fed as input to this algorithm? An array with the binary contents of the file, or the samples obtained with SFML? The algorithm takes as parameters an array with the data to analyze and an array for the transformed data. What exactly is meant by "data to analyze"? And what if the size of the array is not a power of two? Is that a strict requirement?

Do I understand correctly that the algorithm linked above is called like this, with the input simply converted to double?

```
int16* input;                  // n samples from the file
double* in = new double[n];
for (size_t i = 0; i < n; i++)
    in[i] = input[i];          // copy element by element; `in = input` would only copy the pointer
double* out = new double[n];
FFTAnalysis(in, out, n, n);
```

In this example, as I understand it, n is the size of the samples array, i.e. sizeof(input)?

In the SFML library, the number of samples is obtained like this:

```
unsigned long long n = buffer.getSampleCount();
```

I still do not understand why the size of the samples array is not equal to the number of samples, for example when I do this:

```
sf::SoundBuffer buffer;
buffer.loadFromFile("sound.wav");
unsigned long long n = buffer.getSampleCount();
const sf::Int16* raw = new sf::Int16[n]; // note: this allocation leaks,
raw = buffer.getSamples();               // the pointer is immediately overwritten
printf("%zu", sizeof(raw)); // == 4 -- why, if n is a six-digit number?
```

The function FFTAnalysis(in, out, n, n) is passed an array of data to analyze and an array where the transformed data is written (double in and double out), and n is the size of these arrays (the number of samples). By the precondition, n must always be a power of two. What do I do if the number of samples is not a power of two?

3) When we get the array of transformed data, which data do we use to draw the plot? As the coordinates of its points, do I take the imaginary part of the spectrum (i.e., the elements of the transformed-data array), or the power spectrum?

In other words, which data set do I take as the coordinates of the sinusoid's points?

The result should be a plot, but I do not understand what to take as the coordinates of the points. When I use the SFML library to get the number of samples and the samples themselves, and print them in a loop via printf("%hu", raw[i]); to the console, I see almost all zeros with occasional ones. How can any figure be drawn from data like that? Do the samples need some pre-processing before taking them as the coordinates of the points?

## Answer 1, Authority 100%

If we are talking about an uncompressed sound file, then a sample is a reading obtained by digitizing the signal, i.e. simply the instantaneous amplitude value of the analog signal. From these readings you can build your "sinusoid", i.e. the representation of the signal in the **time domain**.

The FFT is the fast Fourier transform; by running it we get the representation of the signal in the **frequency domain** (decomposition into frequencies). How the specific library implements it, I do not know, but the FFT algorithm does need a power-of-two length. You can, however, pad the end with zeros, though that is already a workaround.

I recommend you get familiar with the theoretical side of the question, so that you clearly understand what you are doing. Good luck)