{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Lab 3: Cross validation and feature encoding." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data you will be using in this lab comes from a kaggle in class competition. The goal is to predict the number of shares an article will get on social media, from the article's topic, length, day of publication, and many other features.\n", "\n", "You are given labels, that is, number of shares, for 5000 of these articles. We will explore how to set up cross validation and do some feature encoding." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "%pylab inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 1. Model Selection: setting up a cross validation\n", "\n", "Cross-validation is a good way to perform model selection empirically while avoiding overfitting. \n", "\n", "This procedure can be split into the following two steps: \n", "* the dataset is randomly split into K folds \n", "* the model is run K times, each run using K-1 folds as the training set and evaluating the performance on the remaining fold which is the test set. \n", "\n", "Prediction performance are averaged over all folds. \n", "\n", "When the model contains parameters that need to be tuned, the CV scheme is repeated for all considered values of the hyperparameters, and those leading to the best prediction performance averaged on all folds are retained.\n", "\n", "Depending on the size of the dataset, 5 or 10 folds are usualy considered." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Question:__ In a K-fold cross-validation, how many times does each sample appear in a test set? In a training set? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Answer:__" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Question:** Implement a function which splits the _indices_ of the training data in K folds." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def make_Kfolds(n_instance, n_folds):\n", " \"\"\"\n", " set up a K-fold croos-validation.\n", " \n", " Parameters:\n", " -----------\n", " n_instances: int\n", " the number of instances in the dataset.\n", " n_folds: int\n", " the number of folds of the cross-validation scheme\n", " \n", " Outputs:\n", " --------\n", " fold_list: list\n", " list of folds, a fold is a tuple of 2 lists, \n", " the first one containing the indices of instances of the training set,\n", " the second one containing the indices of instances of the test set\n", " \"\"\"\n", " # Create a list of the n_instance indices [0, 1, ..., n_instance-1]\n", " list_indices = # TODO \n", " # Shuffle the list with np.random.shuffle\n", " # TODO\n", " \n", " # Compute the number of instances per fold (i.e. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def make_Kfolds(n_instances, n_folds):\n", "    \"\"\"\n", "    Set up a K-fold cross-validation.\n", "    \n", "    Parameters:\n", "    -----------\n", "    n_instances: int\n", "        the number of instances in the dataset.\n", "    n_folds: int\n", "        the number of folds of the cross-validation scheme\n", "    \n", "    Outputs:\n", "    --------\n", "    fold_list: list\n", "        list of folds; a fold is a tuple of 2 lists, \n", "        the first one containing the indices of instances of the training set,\n", "        the second one containing the indices of instances of the test set\n", "    \"\"\"\n", "    # Create a list of the n_instances indices [0, 1, ..., n_instances-1]\n", "    list_indices = # TODO \n", "    # Shuffle the list with np.random.shuffle\n", "    # TODO\n", "    \n", "    # Compute the number of instances per fold (i.e. in each test set)\n", "    n_instance_per_fold = # TODO\n", "    print(n_instance_per_fold)\n", "    \n", "    # For each of the first K-1 folds, create the lists of train set and test set indices\n", "    fold_list = []\n", "    for ind_fold in range(n_folds-1):\n", "        test_list = # TODO\n", "        train_list = # TODO\n", "        # TODO add the (train_list, test_list) tuple to fold_list\n", "\n", "    # Process the last fold separately: it takes all the remaining instances\n", "    test_list = # TODO \n", "    train_list = # TODO \n", "    # TODO add the (train_list, test_list) tuple to fold_list \n", "    \n", "    return fold_list" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Check whether your function does what is expected\n", "perso_folds = make_Kfolds(1001, 5)\n", "\n", "for ix, (tr, te) in enumerate(perso_folds):\n", "    print(\"Fold %d\" % ix)\n", "    print(\"\\t %d training points\" % len(tr))\n", "    print(\"\\t %d test points\" % len(te))\n", "    if len(np.intersect1d(tr, te)) > 0:\n", "        print('some instances are both in your training and test sets')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "In practice, when using scikit-learn, you will not implement your cross-validation yourself, but rather rely on the library's functionalities for setting up cross-validation schemes. \n", "\n", "[Here](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection) is the list of available tools in the scikit-learn library.\n", "\n", "We list here two of the most important ones:\n", "* [K-fold](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html#sklearn.model_selection.KFold): provides train/test indices by splitting the dataset into k consecutive folds (without shuffling by default). \n", "* [stratified K-fold](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html#sklearn.model_selection.StratifiedKFold) (to be used in case of classification): this cross-validation object is a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples of each class.\n", "\n", "We will now explore the stratified K-fold on randomly generated data." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Generate random data\n", "n_instances, n_features = 1000, 7\n", "# Design matrix\n", "X = np.random.random((n_instances, n_features))\n", "# Classification labels\n", "y = np.where(np.random.random(n_instances) >= 0.5, 1, 0)\n", "print(y)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Question:** Using scikit-learn, set up a stratified 10-fold cross-validation for the above data." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from sklearn import model_selection\n", "# Initialize a StratifiedKFold object \n", "skf = # TODO\n", "# Split the data using skf\n", "sk_folds = # TODO" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# This is one way to access the training and test points\n", "for ix, (tr, te) in enumerate(sk_folds):\n", "    print(\"Fold %d\" % ix)\n", "    print(\"\\t %d training points\" % len(tr))\n", "    print(\"\\t %d test points\" % len(te))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Important note:__ `sk_folds` is a [_generator_](https://wiki.python.org/moin/Generators), meaning that once you are done looping through it, it will be empty. In practice, this avoids storing all the indices (if you were doing 10-fold cross-validation on a million samples, you would have $10^7$ indices to store)." ] },
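{ "cell_type": "markdown", "metadata": {}, "source": [ "Here is a minimal demonstration of this point (it assumes you have completed the `skf` TODO above)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# The generator is consumed by the first pass and empty afterwards\n", "gen = skf.split(X, y)\n", "print(len(list(gen)))  # 10 folds on the first pass\n", "print(len(list(gen)))  # 0: the generator is now exhausted\n", "# If you need to iterate several times, materialize it once:\n", "sk_folds_list = list(skf.split(X, y))" ] },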
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Question:** Create a cross-validation function that takes a design matrix, a label array, a scikit-learn classifier, and a scikit-learn cross-validation object, and returns the corresponding list of cross-validated predictions. \n", "\n", "The function contains a loop that goes through all folds and, for each fold:\n", "* trains a model on the training data;\n", "* uses this model to make predictions on the test data. \n", "\n", "In this fashion you should be able to form *a single vector of predictions* `y_prob_cv` (as each point from the data appears exactly once as a test point in the cross-validation).\n", "\n", "Make sure that you are returning the predictions in the correct order!\n", "\n", "Check the documentation of fit(X, y) and predict_proba(X) in [sklearn.naive_bayes.GaussianNB](http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html). Every classifier implemented in scikit-learn has fit(X, y) and predict_proba(X) methods. \n", "Note that the predict_proba method returns a 2-dimensional array; you must find a way to keep only the probability of belonging to the positive class." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "def cross_validate(design_matrix, labels, classifier, cv_folds):\n", "    \"\"\" Perform a cross-validation and return the predictions.\n", "    \n", "    Parameters:\n", "    -----------\n", "    design_matrix: (n_samples, n_features) np.array\n", "        Design matrix for the experiment.\n", "    labels: (n_samples, ) np.array\n", "        Vector of labels.\n", "    classifier: sklearn classifier object\n", "        Classifier instance; must have the following methods:\n", "        - fit(X, y) to train the classifier on the data X, y\n", "        - predict_proba(X) to apply the trained classifier to the data X and return probability estimates \n", "    cv_folds: sklearn cross-validation object\n", "        Cross-validation iterator.\n", "    \n", "    Returns:\n", "    --------\n", "    pred: (n_samples, ) np.array\n", "        Vector of predictions (same order as labels).\n", "    \"\"\"\n", "    pred = np.zeros(labels.shape)\n", "    for tr, te in cv_folds:\n", "        # TODO\n", "    return pred" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# To check whether your function runs properly, you can use the following\n", "\n", "# import Gaussian Naive Bayes\n", "from sklearn.naive_bayes import GaussianNB\n", "from sklearn import metrics\n", "\n", "# create a GNB classifier\n", "gnb = GaussianNB()\n", "\n", "# run your cross_validate function (note: we re-create the folds here,\n", "# since the sk_folds generator was consumed by the loop above)\n", "y_prob_cv = cross_validate(X, y, gnb, skf.split(X, y))\n", "\n", "# check y and y_prob_cv have the same length (the number of instances)\n", "print(len(y_prob_cv))\n", "print(len(y))\n", "\n", "# check the accuracy of your prediction (it should be close to 0.5 as the data is random). \n", "print(metrics.accuracy_score(y, np.where(y_prob_cv >= 0.5, 1, 0)))" ] },
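{ "cell_type": "markdown", "metadata": {}, "source": [ "If you are unsure how to keep only the positive-class probabilities, here is a minimal, self-contained sketch on the random data (an illustration, not part of the exercise; `gnb_demo` is just a throwaway name)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# predict_proba returns one column per class, ordered as in classes_;\n", "# keep the column corresponding to the positive class (label 1).\n", "gnb_demo = GaussianNB().fit(X, y)\n", "proba = gnb_demo.predict_proba(X)             # shape (n_samples, n_classes)\n", "pos_index = list(gnb_demo.classes_).index(1)  # column of the positive class\n", "print(proba.shape, proba[:, pos_index].shape)" ] },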
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Extensions**\n", "* **Leave-one-out cross-validation:** in this case, the number of folds is the number of available points in the dataset. In other words, the model is trained K times, each time on K-1 points, and tested on the left-out point. The LOO CV scheme is particularly convenient when the number of samples is very small. When the number of samples is large, it becomes computationally burdensome; moreover, the cross-validated error tends to have a very large variance, which makes it hard to interpret.\n", "\n", "* **Nested cross-validation:** the goal of the cross-validation scheme is to assess the performance of the model on _new_ data which were not used to train or optimize the model. From that perspective, the CV scheme is not rigorous when optimizing hyperparameters. Indeed, the test data are then used both to assess the performance and to choose the set of hyperparameters that led to that best performance. To avoid selecting a possibly over-fitted set of hyperparameters, one uses the so-called nested cross-validation (_Nested CV_) scheme, which consists of a cross-validation (_inner CV_) nested inside another cross-validation (_outer CV_). At each step of the _outer CV_, the optimal hyperparameters are found via the _inner CV_ on the training set of the _outer CV_, and the performance is assessed on the remaining test fold of the _outer CV_. Therefore, in _Nested CV_, hyperparameter optimization and performance assessment are performed on different _unseen_ data. A minimal sketch is given below." ] },
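{ "cell_type": "markdown", "metadata": {}, "source": [ "A minimal sketch of nested CV on the random `X`, `y` data above, assuming scikit-learn >= 0.20 (the grid over GaussianNB's `var_smoothing` parameter is only an illustrative choice of hyperparameter)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import GridSearchCV, cross_val_score\n", "from sklearn.naive_bayes import GaussianNB\n", "\n", "inner_cv = model_selection.StratifiedKFold(n_splits=3, shuffle=True, random_state=0)\n", "outer_cv = model_selection.StratifiedKFold(n_splits=5, shuffle=True, random_state=0)\n", "\n", "# The inner CV tunes the hyperparameter, the outer CV assesses the tuned model\n", "grid = GridSearchCV(GaussianNB(), {'var_smoothing': [1e-9, 1e-8, 1e-7]}, cv=inner_cv)\n", "nested_scores = cross_val_score(grid, X, y, cv=outer_cv)\n", "print(\"nested CV accuracy: %.3f +/- %.3f\" % (nested_scores.mean(), nested_scores.std()))" ] },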
**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "list_feature_names = list(feature_data['feature_names'])\n", "\n", "train_data = pd.read_csv('data/kaggle_data/train.csv', header=None, sep=\" \", names=list_feature_names)\n", "train_data.head(5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_data = pd.read_csv('data/kaggle_data/test-val.csv', header=None, sep=\" \", names=list_feature_names)\n", "test_data.head(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us use visualizations to explore the relationships between pairs of features, and between a feature an the output:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pandas import plotting\n", "plotting.scatter_matrix(train_data.get([\"nb_words_content\", \"pp_uniq_words\"]), alpha=0.2,\n", " figsize=(6, 6), diagonal='kde')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import seaborn as sns\n", "sns.set_style('whitegrid')\n", "\n", "sns.jointplot(\"nb_words_content\", \"pp_uniq_words\", data = train_data, \n", " kind='reg', height=6, space=0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "sns.jointplot(train_data[\"pp_neg_words\"], target_data['Prediction'], \n", " kind='reg', ylim=5000, height=6, space=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "** Question: ** Change the features you display to explore relationships. What conclusions are you drawing from this exploratory analysis? Are you going to keep all the features in your predictors?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Answer:__" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. Data transformation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.1 feature engineering\n", "This notion includes all kinds of manual modification and creation of features. All are of course problem dependant.\n", "\n", "* __Encoding categorical features:__ if a K-categorical feature is not ordered (categorie 1 is as far to categorie 2 as to categorie 3 etc), then it must not be encoded by a single integer specifying the categorie. We can encode such feature by creating K-1 binary features encoding the belonging to k-th category. (see [link](http://scikit-learn.org/stable/modules/preprocessing.html#encoding-categorical-features))\n", "\n", "* __Feature binarization:__ some continuous features can gain predictive power when binarized. For exemple, in some prediction tasks, weekdays could be split into $working\\ days$ and $not\\ working\\ days$. (see [link](http://scikit-learn.org/stable/modules/preprocessing.html#binarization))\n", "\n", "* __Imputation of missing values:__ there are multiple strategies to input missing values when required (see [link](http://scikit-learn.org/stable/modules/preprocessing.html#imputation-of-missing-values)).\n", "\n", "* __Dealing with time features or other periodic features:__ when considering the hour of the day as a feature, we can't encode it by the an integer between 1 and 24 as midnigth is as close to 11pm to 1am. An easy strategy to encode periodic features is to apply this transformation $x \\mapsto \\sin(\\frac{2\\pi x}{T})$ (T is the period). In the case of the hour of the day, it is $x \\mapsto \\sin(\\frac{2\\pi x}{24})$. 
\n", "\n", "* __Generating new features:__ you might want to combine the existing features into new ones that seem informative to you. It can be useful for exemple, notably when working with linear models, to generate polynomial features from the original ones. You can also use external data to transform your features; for instance, if one feature is a date, adding a feature that qualifies whether the day is a working day, a weekday or a holiday can be useful. \n", "* ...\n", "\n", "In many practical cases, feature engineering is the key to obtaining a huge improvement in performance." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "** Question: ** How do you want to engineer the features of the challenge (first, you can start encoding the categorical features)? Keep thinking of this question all along the challenge." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us encode weekdays as a categorical feature rather than a periodic one. Remember this transformation later in the challenge: does it help your performance?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Get the weekday data and encode it using a dummy categorical encoding\n", "weekday_data = pd.get_dummies(train_data['weekday'], prefix='weekday', drop_first=True)\n", "\n", "# Get the rest of the data\n", "other_data = train_data.drop(['weekday'], axis=1)\n", "\n", "# Create a new data set by concatenation of the new weekday data and the old rest of the data\n", "training_data = pd.concat([weekday_data, other_data], axis=1)\n", "\n", "# Print the created training data.\n", "training_data.head(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Question:__ Repeat the process for the other categorical variable(s) in your data. Do not forget to apply your transformation to the test dataset as well!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.2 Preprocessing data: standardization\n", "You might want to consider standardizing your data as seen in Lab 02." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.3 Unsupervised projection\n", "If your number of features is high, it may be useful to reduce it with an unsupervised step prior to supervised steps. \n", "\n", "We have already worked on a widly used dimentionality reduction method in `Lab 1`, the Principal Component Analysis. \n", "\n", "We will discuss in `Lab 5` the combinaison of dimentionality reduction and a predictor." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.4 Feature selection\n", "See [scikit-learn's documentation](http://scikit-learn.org/stable/modules/feature_selection.html).\n", "\n", "It may be useful to select a restricted number of important features to increase their predictive power. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Model evaluation and model selection" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 4.1 Our first classifier: Gaussian Naive Bayes\n", "Documentation: http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html \n", "\n", "In order to start thinking about model evaluation and model selection, we will convert the regression problem of the KaggleInClass challenge into a classification task, so as to work with the first classifier we studied in class: the Gaussian Naive Bayes.\n", "\n", "Our goal here is to try to separate exceptionally shared articles from the rest. Based on the distribution of the number of shares, the separation can be set at 1800 shares." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Transform the output into a classification task.\n", "y_clf = np.where(y_tr >= 1800, 1, 0)\n", "print(np.where(y_clf==0)[0].shape)\n", "print(np.where(y_clf==1)[0].shape)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_clf = training_data.values" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# import Gaussian Naive Bayes\n", "from sklearn.naive_bayes import GaussianNB" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create a Gaussian Naive Bayes classifier, i.e. an instance of GaussianNB\n", "gnb = GaussianNB()" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# fit the classifier to the data\n", "gnb.fit(X_clf, y_clf)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# predict on the same data\n", "y_pred = gnb.predict(X_clf)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# compute the number of mislabeled articles\n", "print(\"Number of mislabeled points out of a total %d points : %d\" % \\\n", "      (X_clf.shape[0], (y_clf != y_pred).sum()))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Note that all predictors implemented in the sklearn library are trained and applied via the `fit` and `predict` (or `predict_proba`) methods.\n", "\n", "**Question:** What are the parameters of the model we have trained? How many of them are there? How can you access them?" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Hint\n", "gnb.__dict__" ] },
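{ "cell_type": "markdown", "metadata": {}, "source": [ "For reference, here is a short sketch of the fitted attributes: per scikit-learn's documentation, these are the per-class feature means and variances plus the class priors (in scikit-learn versions before 1.0, the variances are named `sigma_` instead of `var_`)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(gnb.theta_.shape)   # per-class feature means: (n_classes, n_features)\n", "print(gnb.var_.shape)     # per-class feature variances (sigma_ in older versions)\n", "print(gnb.class_prior_)   # estimated class priors" ] },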
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 4.2 Model evaluation" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Have a look at http://scikit-learn.org/stable/modules/model_evaluation.html, which presents and details a list of metrics for evaluating regression or classification models." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "In the case of regression, the most commonly used metrics are:\n", "* the `mean squared error`;\n", "* the `mean absolute error`, which, compared to the mean squared error, gives less weight to large errors and relatively more weight to small ones, as the following plot shows;\n", "* `R2` (the coefficient of determination), which provides a measure of how well future samples are likely to be predicted by the model." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = np.arange(-2, 2, 0.01)\n", "plt.plot(x, x*x, 'blue', label='$x^2$')\n", "plt.plot(x, abs(x), 'orange', label='$|x|$')\n", "plt.legend(loc=\"upper center\")\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "In the case of classification, many metrics are used, depending on the problem at hand:\n", "\n", "* `accuracy` is the default performance measure, computing the proportion of correctly classified test instances;\n", "* `sensitivity`, or true positive rate, is the proportion of correctly classified positive samples;\n", "* `specificity`, or true negative rate, is the proportion of correctly classified negative samples;\n", "\n", "* `precision` is the ability of the classifier not to label as positive a sample that is negative. Think of cancer diagnosis, where we really want to avoid diagnosing a cancer in somebody who does not have one.\n", "* `recall` is the ability of the classifier to find all the positive samples.\n", " \n", "* `the area under the precision-recall curve`\n", "* `the area under the Receiver Operating Characteristic (ROC) curve`\n", "\n", "A few of these are computed in the sketch below." ] },
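{ "cell_type": "markdown", "metadata": {}, "source": [ "A quick sketch computing a few of these metrics on the training-set predictions `y_pred` obtained in section 4.1 (for illustration only; the pitfall of evaluating on training data is discussed below)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import metrics\n", "\n", "# Confusion matrix: rows are true classes, columns are predicted classes\n", "print(metrics.confusion_matrix(y_clf, y_pred))\n", "print(\"precision: %.3f\" % metrics.precision_score(y_clf, y_pred))\n", "print(\"recall:    %.3f\" % metrics.recall_score(y_clf, y_pred))" ] },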
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Question:** Use the sklearn library to compute the accuracy score of the above prediction." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import metrics\n", "# Score the predictions\n", "print(\"Accuracy: %.3f\" % # TODO\n", "      )" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Building an ROC curve requires using the probability estimates for the test data points *before* they are thresholded." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Predict probability estimates instead of 0/1 class labels\n", "y_prob = gnb.predict_proba(X_clf)\n", "print(y_prob.shape)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Question:** `y_prob` contains two values for each data point because predict_proba returns one probability estimate per class. The order in which the classes appear is given by `gnb.classes_`. How do you get the 1-dimensional array that only contains the estimated probability for each point to belong to the positive class?" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "pos_index = list(gnb.classes_).index(1)\n", "\n", "# ROC curve\n", "fpr, tpr, thresholds = metrics.roc_curve(y_clf, y_prob[:, pos_index], pos_label=1)\n", "\n", "# Area under the ROC curve\n", "auc = metrics.auc(fpr, tpr)\n", "\n", "# Plot the ROC curve\n", "plt.plot(fpr, tpr, '-', color='orange', label='AUC = %0.3f' % auc)\n", "\n", "plt.xlabel('False Positive Rate', fontsize=16)\n", "plt.ylabel('True Positive Rate', fontsize=16)\n", "plt.title('ROC curve: Gaussian Naive Bayes', fontsize=16)\n", "plt.legend(loc=\"lower right\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Question:** Why is it problematic to have evaluated our classifier on the training data?" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Answer:__" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 4.3 Model selection: cross-validation" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We will now use the function `make_Kfolds` you implemented in the first section to evaluate the accuracy of your model via a 5-fold cross-validation scheme. We will compare the results you obtain with those given by scikit-learn's implementation of the cross-validation scheme." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Set up a cross-validation with make_Kfolds\n", "perso_folds = # TODO\n", "\n", "# Set up a cross-validation with sklearn\n", "sk_folds = # TODO" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Assess performance using the cross_validate function you have implemented\n", "# On perso_folds\n", "gnb = GaussianNB()\n", "y_prob_cv_perso = # TODO use cross_validate and perso_folds\n", "print(\"Your own cv-scheme: Accuracy: %.3f\" % # TODO\n", "      )\n", "\n", "# On sk_folds\n", "gnb = GaussianNB()\n", "y_prob_cv_sk = # TODO use cross_validate and sk_folds\n", "print(\"Sklearn cv-scheme: Accuracy: %.3f\" % # TODO\n", "      )" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We will now plot the ROC curves corresponding to your predictions." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Compute the ROC curve corresponding to the y_prob_cv_perso predictions\n", "fpr, tpr, thresholds = metrics.roc_curve(y_clf, y_prob_cv_perso, pos_label=1)\n", "\n", "# Area under the ROC curve\n", "auc = metrics.auc(fpr, tpr)\n", "print(\"Your own cv-scheme: AUROC: %.3f\" % auc)\n", "\n", "# Plot the ROC curve\n", "plt.plot(fpr, tpr, '-', color='orange', label='AUC = %0.3f' % auc)\n", "\n", "# TODO: plot in blue the ROC curve corresponding to the y_prob_cv_sk predictions\n", "\n", "plt.xlabel('False Positive Rate', fontsize=16)\n", "plt.ylabel('True Positive Rate', fontsize=16)\n", "plt.title('ROC curve: Gaussian Naive Bayes', fontsize=16)\n", "plt.legend(loc=\"lower right\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Question:__ The `sklearn.model_selection` module provides some utilities to make cross-validated predictions. Compare the results you obtained to what they return.\n", "\n", "Documentation: [cross_val_score](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gnb = GaussianNB()\n", "skf = model_selection.StratifiedKFold(5, shuffle=True, random_state=91)\n", "\n", "# Use model_selection.cross_val_score to compute the average cross-validated roc_auc score \n", "# of gnb on (X_clf, y_clf), using the skf iterator.\n", "cv_aucs = # TODO\n", "\n", "print(np.mean(cv_aucs))\n", "\n", "# Note that averaging the AUCs obtained over the 5 folds is not the same as \n", "# globally computing the AUC for the predictions made within the cross-validation loop." ] },
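{ "cell_type": "markdown", "metadata": {}, "source": [ "To illustrate this last remark, here is a small self-contained sketch comparing the mean of per-fold AUCs with the AUC of the pooled cross-validated predictions (the two values are usually close but need not be equal)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "skf_demo = model_selection.StratifiedKFold(5, shuffle=True, random_state=91)\n", "fold_aucs = []\n", "pooled_proba = np.zeros(y_clf.shape)\n", "for tr, te in skf_demo.split(X_clf, y_clf):\n", "    gnb_fold = GaussianNB().fit(X_clf[tr], y_clf[tr])\n", "    proba_te = gnb_fold.predict_proba(X_clf[te])[:, 1]  # classes_ is [0, 1] here\n", "    pooled_proba[te] = proba_te\n", "    fold_aucs.append(metrics.roc_auc_score(y_clf[te], proba_te))\n", "print(\"mean of per-fold AUCs:      %.3f\" % np.mean(fold_aucs))\n", "print(\"AUC of pooled predictions:  %.3f\" % metrics.roc_auc_score(y_clf, pooled_proba))" ] },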
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Question:__ Compare scikit-learn's implementation of the cross-validation with yours." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gnb = GaussianNB()\n", "skf = model_selection.StratifiedKFold(5, shuffle=True, random_state=91)\n", "\n", "# Compute the cross-validation accuracy using model_selection.cross_val_predict\n", "y_pred_sk = model_selection.cross_val_predict(# TODO\n", "                                              )\n", "print(\"Cross-validated accuracy: %.3f\" % metrics.accuracy_score(# TODO\n", "                                                                ))\n", "\n", "# Compute the cross-validation accuracy using your own cross_validate function\n", "y_prob_cv = cross_validate(# TODO\n", "                           )\n", "# Transform y_prob_cv into a vector of binary predictions\n", "y_pred_perso = # TODO \n", "print(\"Cross-validated accuracy: %.3f\" % metrics.accuracy_score(# TODO\n", "                                                                ))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Question:__ Does stratifying the cross-validation make a difference?" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Answer:__" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Getting started on the final project\n", "\n", "You will be evaluated on a final project, in the form of a data challenge competition hosted on Codalab.\n", "The data challenge is not public: you can access it through the following link: https://codalab.lisn.upsaclay.fr/competitions/755?secret_key=95ee48c1-fc46-406f-b08b-25a7884d2a61\n", "\n", "- Register on the platform.\n", "- Download the starting kit. It contains the data, a README file, and a script to help with creating the submission bundle.\n", "- Read the README file.\n", "- Run the submission script, and submit the submission bundle.\n", "\n", "You are all set to start working on the final project! Choose to create teams of two or work on your own, and get started training models!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.3" } }, "nbformat": 4, "nbformat_minor": 2 }