I'm new to Python and don't quite understand how to compute the mean relative error of the approximation in the following code:

```
import pandas as pd
import math
from sklearn import svm
from sklearn import preprocessing

df = pd.read_csv('file1.csv', sep=';', header=None)
X_train = df.drop([16, 17], axis=1)
y_train = df[16]

test_data = pd.read_csv('file2.csv', sep=';', header=None)
X_test = test_data.drop([16, 17], axis=1)
y_test = test_data[16]

normalized_X_train = preprocessing.normalize(X_train)
normalized_X_test = preprocessing.normalize(X_test)

xgb_model = svm.SVR(kernel='linear', C=1000.0)
cl = xgb_model.fit(normalized_X_train, y_train)
predictions = cl.predict(normalized_X_test)
```

Is there a ready-made function for this error, or do I have to write a loop? And if a loop, do I need to normalize `y_test` (the actual values)?

## Answer 1, Authority 100%

You can compute it directly from the definition of MAPE (the mean of the element-wise relative errors):

```
mape = ((y_test - y_predicted).abs() / y_test.abs()).mean()
```

If you need a percentage, multiply the result by 100.

P.S. It is also worth mentioning that this metric is rarely used in practice: it is undefined whenever `y_test` contains zeros, since that causes division by zero.
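For completeness, here is a minimal self-contained sketch of the computation above using NumPy arrays; the values of `y_test` and `y_predicted` here are made up purely for illustration. (Newer versions of scikit-learn, 0.24 and later, also ship `sklearn.metrics.mean_absolute_percentage_error`, which does the same thing.)

```python
import numpy as np

# Hypothetical actual values and model predictions (illustrative only)
y_test = np.array([100.0, 50.0, 200.0, 80.0])
y_predicted = np.array([110.0, 45.0, 190.0, 88.0])

# MAPE from its definition: mean of |actual - predicted| / |actual|
mape = np.mean(np.abs(y_test - y_predicted) / np.abs(y_test))
print(mape * 100)  # as a percentage
```

Note that the relative errors here are 0.1, 0.1, 0.05, and 0.1, so the result is 0.0875, i.e. 8.75%. No normalization of `y_test` is needed: the division by `|y_test|` already makes each error scale-free.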