[AutoGluon] [Tabular Classification] Taking Another Shot at Kaggle's Titanic

Introduction

I did much the same thing about a year ago.
touch-sp.hatenablog.com
Since both MXNet and AutoGluon have been updated since then, I gave it another try a year later.

Training on the training data

This time I added presets='best_quality'.

import pandas as pd
from autogluon.tabular import TabularDataset, TabularPredictor

all_data = pd.read_csv('train.csv', index_col=0)

train_data = TabularDataset(all_data)

save_path = 'ag-predict'
predictor = TabularPredictor(label='Survived', path=save_path).fit(
    train_data=train_data,
    presets='best_quality'
)

The model is saved automatically.


The training output is included at the end of this article.

Inference and creating the submission CSV

Load the saved model and run inference on the test data.
The predictions are written to a CSV file for submission to Kaggle.

import pandas as pd
from autogluon.tabular import TabularDataset, TabularPredictor

all_data = pd.read_csv('test.csv', index_col=0)

test_data = TabularDataset(all_data)

save_path = 'ag-predict'
predictor = TabularPredictor.load(save_path)

y_pred = predictor.predict(test_data)

result = pd.DataFrame(y_pred, columns=['Survived'], index=all_data.index)
result.to_csv('result.csv')
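Kaggle's Titanic competition expects a CSV with a PassengerId index and an integer Survived column (0 or 1). Before uploading, a quick sanity check along these lines can catch format mistakes. This is a sketch using a made-up three-row frame rather than the actual result.csv:

```python
import io

import pandas as pd

# A tiny stand-in for result.csv (hypothetical values, for illustration only).
csv_text = "PassengerId,Survived\n892,0\n893,1\n894,0\n"
submission = pd.read_csv(io.StringIO(csv_text), index_col=0)

# The competition expects a single 'Survived' column of 0/1 values,
# indexed by PassengerId (418 rows for the real test set).
assert submission.index.name == 'PassengerId'
assert list(submission.columns) == ['Survived']
assert submission['Survived'].isin([0, 1]).all()
```

For the real file, replacing the inline text with pd.read_csv('result.csv', index_col=0) and additionally checking for 418 rows would do the same job.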

Submitting to Kaggle: what was the result?

I submitted the resulting "result.csv".
The result was...

0.77033
Rank: 8969 / 13844

I'm not sure why the total number of entries is smaller than before.
The accuracy is slightly better than last time.


I did nothing about missing-value imputation and the like, so this is about what you would expect.
I didn't even choose the models myself.
Everything was left to AutoGluon.
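AutoGluon already handles missing values internally (the FillNaFeatureGenerator shows up in the log at the end), so this is optional, but a hand-rolled imputation pass before training might look like the sketch below. The DataFrame here is a made-up miniature with Titanic-style columns, not the real data:

```python
import numpy as np
import pandas as pd

# Made-up miniature of the Titanic data with missing values.
df = pd.DataFrame({
    'Age':      [22.0, np.nan, 35.0, np.nan],
    'Fare':     [7.25, 71.28, np.nan, 8.05],
    'Embarked': ['S', np.nan, 'C', 'S'],
})

# Numeric columns: fill with the median (robust to skewed values like Fare).
for col in ['Age', 'Fare']:
    df[col] = df[col].fillna(df[col].median())

# Categorical columns: fill with the most frequent value.
df['Embarked'] = df['Embarked'].fillna(df['Embarked'].mode()[0])

print(df)  # no NaN remains
```

Whether this actually beats AutoGluon's built-in handling is something that would have to be checked against the leaderboard.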

Summary of the training run

To view a summary of the training run, do the following:

from autogluon.tabular import TabularPredictor

save_path = 'ag-predict'
predictor = TabularPredictor.load(save_path)

summary = predictor.fit_summary()

Running this produces output like the following:

*** Summary of fit() ***
Estimated performance of each model:
                      model  score_val  pred_time_val   fit_time  pred_time_val_marginal  fit_time_marginal  stack_level  can_infer  fit_order
0           LightGBM_BAG_L2   0.879910       1.190925  17.071198                0.037570           0.910563            2       True         16
1       WeightedEnsemble_L3   0.879910       1.192192  17.354336                0.001267           0.283138            3       True         26
2           CatBoost_BAG_L2   0.877666       1.198847  18.396205                0.045492           2.235571            2       True         19
3         LightGBMXT_BAG_L2   0.872054       1.188943  16.984035                0.035589           0.823401            2       True         15
4            XGBoost_BAG_L2   0.872054       1.199248  16.953660                0.045893           0.793026            2       True         23
5    NeuralNetFastAI_BAG_L2   0.870932       1.270180  18.256152                0.116825           2.095518            2       True         22
6       WeightedEnsemble_L2   0.869809       0.246599   3.910404                0.001069           0.314337            2       True         14
7      LightGBMLarge_BAG_L2   0.868687       1.197225  18.086509                0.043870           1.925874            2       True         25
8    NeuralNetFastAI_BAG_L1   0.865320       0.084898   1.929984                0.084898           1.929984            1       True         10
9   RandomForestEntr_BAG_L2   0.856341       1.211845  16.921467                0.058490           0.760833            2       True         18
10    ExtraTreesGini_BAG_L2   0.855219       1.213233  16.958869                0.059878           0.798235            2       True         20
11    NeuralNetMXNet_BAG_L2   0.855219       1.821195  23.737728                0.667841           7.577094            2       True         24
12  RandomForestGini_BAG_L2   0.852974       1.211821  16.983857                0.058466           0.823223            2       True         17
13    ExtraTreesEntr_BAG_L2   0.852974       1.212818  16.948936                0.059464           0.788302            2       True         21
14          CatBoost_BAG_L1   0.847363       0.073447   1.556921                0.073447           1.556921            1       True          7
15          LightGBM_BAG_L1   0.843996       0.032330   0.676245                0.032330           0.676245            1       True          4
16     LightGBMLarge_BAG_L1   0.843996       0.053450   1.562874                0.053450           1.562874            1       True         13
17           XGBoost_BAG_L1   0.837262       0.043221   0.550573                0.043221           0.550573            1       True         11
18        LightGBMXT_BAG_L1   0.835017       0.035059   0.682306                0.035059           0.682306            1       True          3
19  RandomForestGini_BAG_L1   0.826038       0.072360   0.859407                0.072360           0.859407            1       True          5
20  RandomForestEntr_BAG_L1   0.822671       0.079357   0.803799                0.079357           0.803799            1       True          6
21    NeuralNetMXNet_BAG_L1   0.820426       0.534656   5.913800                0.534656           5.913800            1       True         12
22    ExtraTreesEntr_BAG_L1   0.818182       0.058779   0.812680                0.058779           0.812680            1       True          9
23    ExtraTreesGini_BAG_L1   0.818182       0.066245   0.806615                0.066245           0.806615            1       True          8
24    KNeighborsDist_BAG_L1   0.670034       0.008914   0.002877                0.008914           0.002877            1       True          2
25    KNeighborsUnif_BAG_L1   0.659933       0.010638   0.002553                0.010638           0.002553            1       True          1
Number of models trained: 26
Types of models trained:
{'StackerEnsembleModel_KNN', 'StackerEnsembleModel_LGB', 'StackerEnsembleModel_NNFastAiTabular', 'StackerEnsembleModel_TabularNeuralNet', 'StackerEnsembleModel_XGBoost', 'WeightedEnsembleModel', 'StackerEnsembleModel_RF', 'StackerEnsembleModel_XT', 'StackerEnsembleModel_CatBoost'}
Bagging used: True  (with 8 folds)
Multi-layer stack-ensembling used: True  (with 3 levels)
Feature Metadata (Processed):
(raw dtype, special dtypes):
('category', [])                    : 3 | ['Ticket', 'Cabin', 'Embarked']
('float', [])                       : 2 | ['Age', 'Fare']
('int', [])                         : 3 | ['Pclass', 'SibSp', 'Parch']
('int', ['binned', 'text_special']) : 9 | ['Name.char_count', 'Name.word_count', 'Name.capital_ratio', 'Name.lower_ratio', 'Name.special_ratio', ...]
('int', ['bool'])                   : 1 | ['Sex']
('int', ['text_ngram'])             : 9 | ['__nlp__.henry', '__nlp__.john', '__nlp__.master', '__nlp__.miss', '__nlp__.mr', ...]
/mnt/wsl/PHYSICALDRIVE2p1/legacymxnet/lib/python3.8/site-packages/autogluon/core/utils/plots.py:138: UserWarning: AutoGluon summary plots cannot be created because bokeh is not installed. To see plots, please do: "pip install bokeh==2.0.1"
  warnings.warn('AutoGluon summary plots cannot be created because bokeh is not installed. To see plots, please do: "pip install bokeh==2.0.1"')
*** End of fit() summary ***

A warning says that if you want plots, you should install bokeh.
After installing it and running again, a file named "SummaryOfModels.html" is saved.
Opening it shows a figure like the one below.
f:id:touch-sp:20220105095557p:plain

Environment

Intel Core i7-7700K
RAM 32G
NVIDIA GTX 1080 (VRAM 8G)
Ubuntu 20.04 on WSL2
Python 3.8.10
autogluon==0.3.2b20220104
mxnet-cu112==1.9.0

Training output

This is the output from the training run.

Presets specified: ['best_quality']
Beginning AutoGluon training ...
AutoGluon will save models to "ag-predict/"
AutoGluon Version:  0.3.2b20220104
Python Version:     3.8.10
Operating System:   Linux
Train Data Rows:    891
Train Data Columns: 10
Preprocessing data ...
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
        2 unique label values:  [0, 1]
        If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])
Selected class <--> label mapping:  class 1 = 1, class 0 = 0
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
        Available Memory:                    6555.41 MB
        Train Data (Original)  Memory Usage: 0.31 MB (0.0% of available memory)
        Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
        Stage 1 Generators:
                Fitting AsTypeFeatureGenerator...
                        Note: Converting 1 features to boolean dtype as they only contain 2 unique values.
        Stage 2 Generators:
                Fitting FillNaFeatureGenerator...
        Stage 3 Generators:
                Fitting IdentityFeatureGenerator...
                Fitting CategoryFeatureGenerator...
                        Fitting CategoryMemoryMinimizeFeatureGenerator...
                Fitting TextSpecialFeatureGenerator...
                        Fitting BinnedFeatureGenerator...
                        Fitting DropDuplicatesFeatureGenerator...
                Fitting TextNgramFeatureGenerator...
                        Fitting CountVectorizer for text features: ['Name']
                        CountVectorizer fit with vocabulary size = 8
        Stage 4 Generators:
                Fitting DropUniqueFeatureGenerator...
        Types of features in original data (raw dtype, special dtypes):
                ('float', [])        : 2 | ['Age', 'Fare']
                ('int', [])          : 3 | ['Pclass', 'SibSp', 'Parch']
                ('object', [])       : 4 | ['Sex', 'Ticket', 'Cabin', 'Embarked']
                ('object', ['text']) : 1 | ['Name']
        Types of features in processed data (raw dtype, special dtypes):
                ('category', [])                    : 3 | ['Ticket', 'Cabin', 'Embarked']
                ('float', [])                       : 2 | ['Age', 'Fare']
                ('int', [])                         : 3 | ['Pclass', 'SibSp', 'Parch']
                ('int', ['binned', 'text_special']) : 9 | ['Name.char_count', 'Name.word_count', 'Name.capital_ratio', 'Name.lower_ratio', 'Name.special_ratio', ...]
                ('int', ['bool'])                   : 1 | ['Sex']
                ('int', ['text_ngram'])             : 9 | ['__nlp__.henry', '__nlp__.john', '__nlp__.master', '__nlp__.miss', '__nlp__.mr', ...]
        0.2s = Fit runtime
        10 features in original data used to generate 27 features in processed data.
        Train Data (Processed) Memory Usage: 0.07 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.16s ...
AutoGluon will gauge predictive performance using evaluation metric: 'accuracy'
        To change this, specify the eval_metric argument of fit()
AutoGluon will fit 2 stack levels (L1 to L2) ...
Fitting 13 L1 models ...
Fitting model: KNeighborsUnif_BAG_L1 ...
        0.6599   = Validation score   (accuracy)
        0.0s     = Training   runtime
        0.01s    = Validation runtime
Fitting model: KNeighborsDist_BAG_L1 ...
        0.67     = Validation score   (accuracy)
        0.0s     = Training   runtime
        0.01s    = Validation runtime
Fitting model: LightGBMXT_BAG_L1 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.835    = Validation score   (accuracy)
        0.68s    = Training   runtime
        0.04s    = Validation runtime
Fitting model: LightGBM_BAG_L1 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.844    = Validation score   (accuracy)
        0.68s    = Training   runtime
        0.03s    = Validation runtime
Fitting model: RandomForestGini_BAG_L1 ...
        0.826    = Validation score   (accuracy)
        0.86s    = Training   runtime
        0.07s    = Validation runtime
Fitting model: RandomForestEntr_BAG_L1 ...
        0.8227   = Validation score   (accuracy)
        0.8s     = Training   runtime
        0.08s    = Validation runtime
Fitting model: CatBoost_BAG_L1 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8474   = Validation score   (accuracy)
        1.56s    = Training   runtime
        0.07s    = Validation runtime
Fitting model: ExtraTreesGini_BAG_L1 ...
        0.8182   = Validation score   (accuracy)
        0.81s    = Training   runtime
        0.07s    = Validation runtime
Fitting model: ExtraTreesEntr_BAG_L1 ...
        0.8182   = Validation score   (accuracy)
        0.81s    = Training   runtime
        0.06s    = Validation runtime
Fitting model: NeuralNetFastAI_BAG_L1 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8653   = Validation score   (accuracy)
        1.93s    = Training   runtime
        0.08s    = Validation runtime
Fitting model: XGBoost_BAG_L1 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8373   = Validation score   (accuracy)
        0.55s    = Training   runtime
        0.04s    = Validation runtime
Fitting model: NeuralNetMXNet_BAG_L1 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8204   = Validation score   (accuracy)
        5.91s    = Training   runtime
        0.53s    = Validation runtime
Fitting model: LightGBMLarge_BAG_L1 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.844    = Validation score   (accuracy)
        1.56s    = Training   runtime
        0.05s    = Validation runtime
Fitting model: WeightedEnsemble_L2 ...
        0.8698   = Validation score   (accuracy)
        0.31s    = Training   runtime
        0.0s     = Validation runtime
Fitting 11 L2 models ...
Fitting model: LightGBMXT_BAG_L2 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8721   = Validation score   (accuracy)
        0.82s    = Training   runtime
        0.04s    = Validation runtime
Fitting model: LightGBM_BAG_L2 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8799   = Validation score   (accuracy)
        0.91s    = Training   runtime
        0.04s    = Validation runtime
Fitting model: RandomForestGini_BAG_L2 ...
        0.853    = Validation score   (accuracy)
        0.82s    = Training   runtime
        0.06s    = Validation runtime
Fitting model: RandomForestEntr_BAG_L2 ...
        0.8563   = Validation score   (accuracy)
        0.76s    = Training   runtime
        0.06s    = Validation runtime
Fitting model: CatBoost_BAG_L2 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8777   = Validation score   (accuracy)
        2.24s    = Training   runtime
        0.05s    = Validation runtime
Fitting model: ExtraTreesGini_BAG_L2 ...
        0.8552   = Validation score   (accuracy)
        0.8s     = Training   runtime
        0.06s    = Validation runtime
Fitting model: ExtraTreesEntr_BAG_L2 ...
        0.853    = Validation score   (accuracy)
        0.79s    = Training   runtime
        0.06s    = Validation runtime
Fitting model: NeuralNetFastAI_BAG_L2 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8709   = Validation score   (accuracy)
        2.1s     = Training   runtime
        0.12s    = Validation runtime
Fitting model: XGBoost_BAG_L2 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8721   = Validation score   (accuracy)
        0.79s    = Training   runtime
        0.05s    = Validation runtime
Fitting model: NeuralNetMXNet_BAG_L2 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8552   = Validation score   (accuracy)
        7.58s    = Training   runtime
        0.67s    = Validation runtime
Fitting model: LightGBMLarge_BAG_L2 ...
        Fitting 8 child models (S1F1 - S1F8)
ParallelLocalFoldFittingStrategy is used to fit folds
        0.8687   = Validation score   (accuracy)
        1.93s    = Training   runtime
        0.04s    = Validation runtime
Fitting model: WeightedEnsemble_L3 ...
        0.8799   = Validation score   (accuracy)
        0.28s    = Training   runtime
        0.0s     = Validation runtime
AutoGluon training complete, total runtime = 53.65s ...
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("ag-predict/")

It is surprising that the entire training run took less than a minute.