{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Lab 2: Feature Processing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature standardization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `vinho verde` data set contains physico-chemical information on a number of Portuguese wines, as well as their rating by human tasters. \n", "\n", "Our goal is to use these data to automatically predict the rating of a wine, so as to assist oenologists, improve wine production, and target the taste of niche consumers.\n", "\n", "This data set has been made available on the UCI archive repository (it is one of the oldest and most well-known repository of ML problems).\n", "\n", "It is available from: http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/ (but already in your repository; we will focus on white wines here)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data = pd.read_csv('data/winequality-white.csv', sep=\";\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "type(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have loaded the data in a _pandas DataFrame_ object. Let us examine what information is available:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.head(n=5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data contains 12 columns. The first 10 (fixed acidity -- alcohol) are physico-chemical features of the wines; the last one is their rating (or quality).\n", "\n", "Let us extract from this data a numpy array that contains the design matrix X:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X = data.values[:, :-1]\n", "print(X.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Question:__ Extract from this data a one-dimensional numpy array that contains the labels y." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y = data['quality']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us now plot a histogram of the values taken by each of our features:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pylab inline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create a figure of size 16x12\n", "fig = plt.figure(figsize=(16, 12))\n", "\n", "for feat_idx in range(X.shape[1]):\n", " # create a subplot in the (feat_idx+1) position of a 3x4 grid\n", " ax = fig.add_subplot(3, 4, (feat_idx+1))\n", " # plot the histogram of feat_idx\n", " h = ax.hist(X[:, feat_idx], bins=50, color='steelblue', edgecolor='none')\n", " # use the name of the feature as a title for each histogram\n", " ax.set_title(data.columns[feat_idx], fontsize=14)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Question:__\n", "What are the ranges of values taken by the different features? What do you think is going to happen when one computes the euclidean distance between two samples: will the `free sulfur dioxide` be accounted for in a manner similar to the `sulphates`? 
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Answer:__" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 5-nearest-neighbor prediction\n", "We will now see how to use scikit-learn to split the data into a training set and a test set, train a nearest-neighbor regressor on the training data, and evaluate its performance on the test set." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Splitting the data" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import model_selection\n", "\n", "X_train, X_test, y_train, y_test = model_selection.train_test_split(\n", "    X, y,\n", "    test_size=0.3,  # 30% of the data in the test set\n", ")" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Creating a 5-nearest-neighbor regressor" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import neighbors" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = neighbors.KNeighborsRegressor(n_neighbors=5)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Training the 5-NN regressor on the training data" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.fit(X_train, y_train)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Making predictions with the trained model" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_pred = model.predict(X_test)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Compute the RMSE between the predictions and the true values\n", "from sklearn import metrics\n", "np.sqrt(metrics.mean_squared_error(y_test, y_pred))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Feature standardization" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import preprocessing\n", "\n", "# Create a standardizer object and fit it to the training data.\n", "std_scale = preprocessing.StandardScaler().fit(X_train)\n", "\n", "# Apply the standardization to the training and the test data.\n", "X_train_std = std_scale.transform(X_train)\n", "X_test_std = std_scale.transform(X_test)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Question:__ Why did we fit the standardizer (i.e., compute the mean and standard deviation of each feature) on the training set only?" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Answer:__" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Question:__ Visualize the scaled data again to check that the standardization had the intended effect." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Effect of the feature standardization on the model" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "__Question:__ Train a new model on the standardized data. Is it better than the one trained on non-standardized data?" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO" ] },
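{ "cell_type": "markdown", "metadata": {}, "source": [ "One possible solution (a sketch; your own may differ, and the names `model_std` and `y_pred_std` are just illustrative): train the same 5-nearest-neighbor regressor on the standardized training data and compute its RMSE on the standardized test data, so that it can be compared with the RMSE obtained above." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch of a possible solution: the same 5-NN regressor, but trained on the standardized features\n", "model_std = neighbors.KNeighborsRegressor(n_neighbors=5)\n", "model_std.fit(X_train_std, y_train)\n", "y_pred_std = model_std.predict(X_test_std)\n", "\n", "# RMSE on the standardized test data, to compare with the RMSE obtained above\n", "np.sqrt(metrics.mean_squared_error(y_test, y_pred_std))" ] },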
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Categorical features" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will work with a data set that describes mushrooms according to the shape of their cap and stalk, their odor, the type of their veil, etc. This data set also contains information on whether a mushroom is edible or not, and that is what we will try to predict." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Data are available as `data/mushrooms.csv`. Let us load them in a pandas DataFrame called `df`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = pd.read_csv('data/mushrooms.csv')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us look at the first few lines of df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, the features are encoded as _letters_. Each letter correspond to a category . For example, for the `cap shape` feature, `b` corresponds to a bell cap, `c` to a conical cap, `f` to a flat cap, `k` to a knobbed cap, `s` to a sunken cap, and `x` to a convex cap. For more details about their meaning, you can consult [the documentation of the data set](https://archive.ics.uci.edu/ml/datasets/Mushroom)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Direct conversion to numerical attributes\n", "In order to work with this data, we need to convert the categorical attributes into numerical values. Here we will simply convert each letter to a number between 0 and the number of categories, using scikit-learn's [preprocessing.LabelEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import preprocessing\n", "\n", "labelencoder = preprocessing.LabelEncoder()\n", "for col in df.columns:\n", " df[col] = labelencoder.fit_transform(df[col])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### One-hot encoding" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This encoding is not necessarily the best, as (for example), an algorithm that uses the Euclidean distance will consider that a convex cap (`x` converted to 5) is closer to a sunken cap (`s` converted to 4) than to a conical cap (`c` converted to 1), and the [one-hot encoding](http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-categorical-features) is a good alternative. However, it has the drawback of increasing the number of features, and of creating correlated features." 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load the data again if you want to start from the original (letter-encoded) data\n", "#df = pd.read_csv('data/mushrooms.csv')\n", "\n", "# Note: df[df.columns] is the whole DataFrame, including the edible/poisonous label;\n", "# for an actual design matrix you would drop the label column before encoding.\n", "ohe_encoder = preprocessing.OneHotEncoder()\n", "X = ohe_encoder.fit_transform(df[df.columns])" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# X is stored as a sparse matrix; toarray() converts it to a dense numpy array\n", "X.toarray()" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.3" } }, "nbformat": 4, "nbformat_minor": 2 }