I have a dataset split into features, x, and targets, y, and a Ridge model with tuned hyperparameters. Can I check the prediction quality of this model using cross_val_score?
```python
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

x = dataset[['iw', 'if', 'vw', 'fp']].values
y = dataset[['depth', 'width']].values

model_ridge = Ridge(alpha=0.001)
results = cross_val_score(model_ridge, x, y, cv=4, scoring='r2')
results
# Output: array([ 0.44374476,  0.39469688,  0.26293681, -0.05665834])
results.mean()
# Output: 0.261180024048342
```
Does this mean the quality of the model is 26%?
And do I understand correctly how cross_val_score works, namely that it splits x and y into 4 parts, trains the model on 3/4 of the parts, and tests it on the remaining 1/4?
Answer:
"Do I understand correctly how cross_val_score works, namely that it splits X and y into 4 parts, trains the model on 3/4 of the parts, and tests it on the remaining 1/4?"
Yes, you understand it correctly: with cv=4 that is exactly how it works.
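To make the splitting concrete, here is a minimal sketch on synthetic stand-in data (12 samples, not your dataset): for a regressor, an integer cv=4 in cross_val_score corresponds to KFold(n_splits=4), so each fold trains on 3/4 of the rows and tests on the remaining 1/4.

```python
import numpy as np
from sklearn.model_selection import KFold

# 12 synthetic samples stand in for the real dataset
X = np.arange(24).reshape(12, 2)

# cv=4 for a regressor behaves like KFold(n_splits=4)
splits = list(KFold(n_splits=4).split(X))
for fold, (train_idx, test_idx) in enumerate(splits):
    print(f"fold {fold}: train={len(train_idx)} test={len(test_idx)}")
# each fold trains on 9 samples (3/4) and tests on 3 samples (1/4)
```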
Judging by the cross-validation results, the R² score varies a lot across the different parts of the dataset, and in your case it gets progressively worse. You could try shuffling the dataset before running cross-validation:
```python
>>> from sklearn.model_selection import ShuffleSplit
>>> n_samples = X.shape[0]
>>> cv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=0)
>>> cross_val_score(clf, X, y, cv=cv)  # clf is your estimator, e.g. model_ridge
```
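If you would rather keep the original four folds instead of random 70/30 splits, shuffling can also be done by passing a KFold with shuffle=True as cv. A sketch on synthetic stand-in data (the real feature columns and targets are not available here, so the arrays below are assumptions):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

# synthetic stand-ins: 80 rows, 4 features, 2 targets (like depth/width)
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))
y = X @ rng.normal(size=(4, 2)) + 0.1 * rng.normal(size=(80, 2))

# shuffle=True randomizes row order before cutting the 4 folds,
# so each fold mixes the data instead of taking contiguous blocks
cv = KFold(n_splits=4, shuffle=True, random_state=0)
scores = cross_val_score(Ridge(alpha=0.001), X, y, cv=cv, scoring='r2')
print(scores, scores.mean())
```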