I first came across GroupKFold in a Kaggle notebook and wasn't sure how it differs from KFold, which is what prompted me to look into the question.
2. KFold
KFold splits the dataset into k consecutive folds; each fold is used once as the test set while the remaining k-1 folds form the training set.

>>> import numpy as np
>>> from sklearn.model_selection import KFold
>>> X = ["a", "b", "c", "d"]
>>> kf = KFold(n_splits=2)
>>> for train, test in kf.split(X):
...     print("%s %s" % (train, test))
[2 3] [0 1]
[0 1] [2 3]
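As a sketch of how these index arrays are typically consumed (this is my own addition, not part of the original post; the toy data and the choice of LogisticRegression are assumptions for illustration), each pair of arrays can be used to index the features and labels directly:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)   # 10 toy samples with 2 features
y = np.array([0, 1] * 5)           # alternating binary labels

kf = KFold(n_splits=2)
for train, test in kf.split(X):
    model = LogisticRegression().fit(X[train], y[train])  # fit on the training indices
    print(model.score(X[test], y[test]))                  # accuracy on the held-out fold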
3. Stratified k-fold
StratifiedKFold is a variant of k-fold that splits according to the label distribution, so that the class proportions in each fold are approximately the same as in the original dataset.
>>> from sklearn.model_selection import StratifiedKFold
>>> X = np.ones(10)
>>> y = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
>>> skf = StratifiedKFold(n_splits=3)
>>> for train, test in skf.split(X, y):
...     print("%s %s" % (train, test))
[2 3 6 7 8 9] [0 1 4 5]
[0 1 3 4 5 8 9] [2 6 7]
[0 1 2 4 5 6 7] [3 8 9]
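A quick way to see the stratification at work (my own check, not from the original article) is to count the labels that land in each test fold; with the 4:6 class split above, every fold keeps roughly that same ratio:

import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.ones(10)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])   # 4 samples of class 0, 6 of class 1

skf = StratifiedKFold(n_splits=3)
for train, test in skf.split(X, y):
    print(np.bincount(y[test]))   # per-class counts in the test fold, roughly 4:6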
4. Group k-fold
GroupKFold guarantees that samples from the same group never appear in both the training set and the test set. If the training set contained a few samples from every group, a sufficiently flexible model could learn group-specific patterns from those samples and look very good on the test set, yet perform poorly as soon as it encounters a new group.
>>> from sklearn.model_selection import GroupKFold
>>> X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10]
>>> y = ["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"]
>>> groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
>>> gkf = GroupKFold(n_splits=3)
>>> for train, test in gkf.split(X, y, groups=groups):
...     print("%s %s" % (train, test))
[0 1 2 3 4 5] [6 7 8 9]
[0 1 2 6 7 8 9] [3 4 5]
[3 4 5 6 7 8 9] [0 1 2]
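To make the group separation explicit (again my own check, not from the source), one can confirm that the group ids seen in training never overlap with those in the test fold:

import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(10)
y = np.zeros(10)
groups = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 3])

gkf = GroupKFold(n_splits=3)
for train, test in gkf.split(X, y, groups=groups):
    print(set(groups[train]), set(groups[test]))        # e.g. {1, 2} vs {3}
    assert set(groups[train]).isdisjoint(groups[test])  # no group leaks across the split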
Open question: I haven't yet found a practical use for GroupKFold in my own work; I'll add one here when I do... Source: https://zhuanlan.zhihu.com/p/52515873
Figure reference: https://scikit-learn.org/stable/auto_examples/model_selection/plot_cv_indices.html#sphx-glr-auto-examples-model-selection-plot-cv-indices-py