{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"[Home Page](../START_HERE.ipynb)\n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"[1]\n",
"[2](02-Intro_to_cuDF_UDFs.ipynb)\n",
"[3](03-Cudf_Exercise.ipynb)\n",
" \n",
" \n",
" \n",
" \n",
"[Next Notebook](02-Intro_to_cuDF_UDFs.ipynb)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Intro to cuDF\n",
"=======================\n",
"\n",
"Welcome to first cuDF tutorial notebook! This is a short introduction to cuDF, partly modeled after 10 Minutes to Pandas, geared primarily for new users. cuDF is a Python GPU DataFrame library (built on the Apache Arrow columnar memory format) for loading, joining, aggregating, filtering, and otherwise manipulating data. While we'll only cover some of the cuDF functionality, but at the end of this tutorial we hope you'll feel confident creating and analyzing GPU DataFrames. The tutorial is split into different modules as shown in the list below, based on their individual content and also contain embedded exercises for your practice.\n",
"\n",
"For more information about CuDF refer to the documentation here: https://docs.rapids.ai/api/cudf/stable/\n",
" \n",
"We'll start by introducing the pandas library, and quickly move on cuDF."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Here is the list of contents in the lab:\n",
"- Pandas
A short introduction to the Pandas library, used commonly in data science for data manipulation and convenient storage.\n",
"- CuDF Dataframes
This module shows you how to work with CuDF dataframes, the GPU equivalent of Pandas dataframes, for faster data transactions. It includes creating CuDF objects, viewing data, selecting data, boolean indexing and dealing with missing data.\n",
"- Operations
Learn how to view descriptive statistics, perform string operations, concatenate, joins, append, group data and use applymap.\n",
"- TimeSeries
Introduction to using TimeSeries data format in CuDF \n",
"- Converting Data Representations
Here we will work with converting data representations, including CuPy, Pandas and Numpy, that are commonly required in data science pipelines.\n",
"- Getting Data In and Out
Transfering CuDf dataframes to and from CSV files.\n",
"- Performance
Study the performance metrics of code written using CuDF compared to the regular data science operations and observe the benefits.\n",
"- Sensor Data Example
Apply the fundamental methods of CuDF to work with a sample of sensor data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import numpy as np\n",
"import math\n",
"np.random.seed(12)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Pandas\n",
"\n",
"Data scientists typically work with two types of data: unstructured and structured. Unstructured data often comes in the form of text, images, or videos. Structured data - as the name suggests - comes in a structured form, often represented by a table or CSV. We'll focus the majority of these tutorials on working with structured data.\n",
"\n",
"There exist many tools in the Python ecosystem for working with structured, tabular data but few are as widely used as Pandas. Pandas represents data in a table and allows a data scientist to manipulate the data to perform a number of useful operations such as filtering, transforming, aggregating, merging, visualizing and many more. \n",
"\n",
"For more information on Pandas, check out the excellent documentation: http://pandas.pydata.org/pandas-docs/stable/\n",
"\n",
"Below we show how to create a Pandas DataFrame, an internal object for representing tabular data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd; print('Pandas Version:', pd.__version__)\n",
"\n",
"\n",
"# here we create a Pandas DataFrame with\n",
"# two columns named \"key\" and \"value\"\n",
"df = pd.DataFrame()\n",
"df['key'] = [0, 0, 2, 2, 3]\n",
"df['value'] = [float(i + 10) for i in range(5)]\n",
"print(df)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can perform many operations on this data. For example, let's say we wanted to sum all values in the in the `value` column. We could accomplish this using the following syntax:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aggregation = df['value'].sum()\n",
"print(aggregation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## cuDF\n",
"\n",
"Pandas is fantastic for working with small datasets that fit into your system's memory and workflows that are not computationally intense. However, datasets are growing larger and data scientists are working with increasingly complex workloads - the need for accelerated computing is increasing rapidly.\n",
"\n",
"cuDF is a package within the RAPIDS ecosystem that allows data scientists to easily migrate their existing Pandas workflows from CPU to GPU, where computations can leverage the immense parallelization that GPUs provide.\n",
"\n",
"Below, we show how to create a cuDF DataFrame."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cudf; print('cuDF Version:', cudf.__version__)\n",
"\n",
"\n",
"# here we create a cuDF DataFrame with\n",
"# two columns named \"key\" and \"value\"\n",
"df = cudf.DataFrame()\n",
"df['key'] = [0, 0, 2, 2, 3]\n",
"df['value'] = [float(i + 10) for i in range(5)]\n",
"df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As before, we can take this cuDF DataFrame and perform a `sum` operation over the `value` column. The key difference is that any operations we perform using cuDF use the GPU instead of the CPU."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aggregation = df['value'].sum()\n",
"print(aggregation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note how the syntax for both creating and manipulating a cuDF DataFrame is identical to the syntax necessary to create and manipulate Pandas DataFrames; the cuDF API is based on the Pandas API. This design choice minimizes the cognitive burden of switching from a CPU based workflow to a GPU based workflow and allows data scientists to focus on solving problems while benefitting from the speed of a GPU!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"# DataFrame Basics with cuDF\n",
"\n",
"In the following tutorial, you'll get a chance to familiarize yourself with cuDF. For those of you with experience using pandas, this should look nearly identical.\n",
"\n",
"Along the way you'll notice small exercises. These exercises are designed to help you get a feel for writing the code yourself, but if you get stuck, you can take a look at the solutions.\n",
"\n",
"Portions of this were borrowed from the 10 Minutes to cuDF guide."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"Object Creation\n",
"---------------"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Creating a `cudf.Series`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s = cudf.Series([1,2,3,None,4])\n",
"print(s)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Creating a `cudf.DataFrame` by specifying values for each column."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = cudf.DataFrame({'a': list(range(20)),\n",
"'b': list(reversed(range(20))),\n",
"'c': list(range(20))})\n",
"print(df)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Creating a `cudf.DataFrame` from a `pd.Dataframe`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pdf = pd.DataFrame({'a': [0, 1, 2, 3],'b': [0.1, 0.2, None, 0.3]})\n",
"gdf = cudf.DataFrame.from_pandas(pdf)\n",
"print(gdf)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"Viewing Data\n",
"-------------"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Viewing the top rows of a GPU dataframe."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(df.head(2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sorting by values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(df.sort_values(by='b'))"
]
},
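{
"cell_type": "markdown",
"metadata": {},
"source": [
"`sort_values` sorts in ascending order by default. As a quick sketch (not part of the original lab), the `ascending` flag reverses the order, just as in pandas:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sort by column 'b' in descending order\n",
"print(df.sort_values(by='b', ascending=False))"
]
},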
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"\n",
"Selection\n",
"------------\n",
"\n",
"## Getting data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Selecting a single column, which initially yields a `cudf.Series` (equivalent to `df.a`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(df['a'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## Selection by Label"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Selecting rows from index 2 to index 5 from columns `a` and `b`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(df.loc[2:5, ['a', 'b']])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Selection by Position"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Selecting via integers and integer slices, like numpy/pandas."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.iloc[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.iloc[0:3, 0:2]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also select elements of a `DataFrame` or `Series` with direct index access."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df[3:5]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s[3:5]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Exercise 1\n",
"\n",
"Try to select only the rows at index `4` and `9` from `df`.\n",
"\n",
"Solution
\n",
" \n",
"
print(df.iloc[[4,9]])\n",
"
\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" \n",
"\n",
"## Boolean Indexing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Selecting rows in a `DataFrame` or `Series` by direct Boolean indexing."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(df[df.b > 15])"
]
},
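{
"cell_type": "markdown",
"metadata": {},
"source": [
"Boolean masks can be combined with the elementwise `&` and `|` operators, mirroring the pandas idiom. A minimal sketch (each condition needs its own parentheses):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# select rows where both conditions hold;\n",
"# parentheses are required around each condition\n",
"print(df[(df.b > 15) & (df.a < 3)])"
]
},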
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Selecting values from a `DataFrame` where a Boolean condition is met, via the `query` API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(df.query(\"b == 3\")) "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"val = 3\n",
"df.query(\"b == @val\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also pass local variables to cuDF queries, via the `local_dict` keyword or `@` operator."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"cudf_comparator = 3\n",
"print(df.query(\"b == @cudf_comparator\"))"
]
},
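{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same query can also be written with the `local_dict` keyword, passing the variable explicitly instead of capturing it from the enclosing scope. A minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# supply the value of @val explicitly via local_dict\n",
"print(df.query(\"b == @val\", local_dict={'val': 3}))"
]
},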
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Supported logical operators include `>`, `<`, `>=`, `<=`, `==`, and `!=`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Exercise 2\n",
"\n",
"Try to select only the rows from `df` where the value in column `b` is greater than the value in column `c` + 6.\n",
"\n",
"Solution
\n",
" \n",
"
print(df.query(\"b > c + 6\"))\n",
"
\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"Missing Data\n",
"------------"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Missing data can be replaced by using the `fillna` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(s.fillna(999))"
]
},
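{
"cell_type": "markdown",
"metadata": {},
"source": [
"`fillna` accepts any scalar, so a common pattern (sketched below; not part of the original lab) is to fill missing values with a statistic of the column, such as its mean:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# fill nulls with the mean of the non-null values (here, 2.5)\n",
"print(s.fillna(s.mean()))"
]
},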
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Missing data can be dropped by using the `dropna` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(s.dropna())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"Operations\n",
"------------"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Stats"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Calculating descriptive statistics for a `Series`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s = cudf.Series(np.arange(10)).astype(np.float32)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(s.mean(), s.var(), s.std(), s.kurtosis())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(df.describe())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Applymap"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Applying functions to a `Series`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(s.applymap(lambda x: (x ** 2) + (x / 2)))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def add_ten(num):\n",
" return num + 10\n",
"\n",
"print(s.applymap(add_ten))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## String Methods"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like pandas, cuDF provides string processing methods in the `str` attribute of `Series`. Full documentation of string methods is a work in progress. Please see the [cuDF](https://docs.rapids.ai/api/cudf/nightly/) and [nvStrings](https://docs.rapids.ai/api/nvstrings/nightly/) API documentation for more information."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s = cudf.Series(['A', 'B', 'C', 'Aaba', 'Baca', None, 'CABA', 'dog', 'cat'])\n",
"print(s.str.lower())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To check the number of bytes in each string of the series:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(s.str.byte_count())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To find whether a word or regular expression occurs in the strings of the series:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(s.str.contains('C|dog', regex=True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Exercise 3\n",
"\n",
"Try to convert all the strings to uppercase. Take a look at the nvStrings API documentation linked above if you need some help.\n",
"\n",
"Solution
\n",
" \n",
"
print(s.str.upper())\n",
"
\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Concat"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Concatenating `Series` and `DataFrames` row-wise."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s = cudf.Series([1, 2, 3, None, 5])\n",
"print(cudf.concat([s, s]))"
]
},
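{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `concat` keeps the original index values, so the result above repeats indices 0 through 4. A brief sketch of the `ignore_index` option, which renumbers the result instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# renumber the concatenated result with a fresh index\n",
"print(cudf.concat([s, s], ignore_index=True))"
]
},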
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Append"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Appending values from another `Series` or array-like object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(s.append(s))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Join"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Performing SQL style merges. Note that the dataframe order is not maintained, but may be restored post-merge by sorting by the index."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_a = cudf.DataFrame()\n",
"df_a['key'] = ['a', 'b', 'c', 'd', 'e']\n",
"df_a['vals_a'] = [float(i + 10) for i in range(5)]\n",
"\n",
"df_b = cudf.DataFrame()\n",
"df_b['key'] = ['a', 'c', 'e']\n",
"df_b['vals_b'] = [float(i+100) for i in range(3)]\n",
"\n",
"merged = df_a.merge(df_b, on=['key'], how='left')\n",
"print(merged)"
]
},
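{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the merged row order is not guaranteed, you can sort the result to obtain a deterministic ordering. A small sketch, sorting on the join key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sort the merge result by the join key for a stable order\n",
"print(merged.sort_values(by='key'))"
]
},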
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Exercise 4\n",
"\n",
"Using the DataFrames we created above, try to do an `inner` join using `merge`.\n",
"\n",
"Solution
\n",
" \n",
"
print(df_a.merge(df_b, on=['key'], how='inner'))\n",
"
\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Grouping"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Like pandas, cuDF supports the [Split-Apply-Combine](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html) groupby paradigm."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df['agg_col1'] = [1 if x % 2 == 0 else 0 for x in range(len(df))]\n",
"df['agg_col2'] = [1 if x % 3 == 0 else 0 for x in range(len(df))]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Grouping and then applying the `sum` function to the grouped data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.groupby('agg_col1').sum()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Grouping and applying statistical functions to specific columns, using `agg`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.groupby('agg_col1').agg({'a':'max', 'b':'mean', 'c':'sum'})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Exercise 5\n",
"\n",
"We can also group by multiple columns at once, which we call grouping hierarchically. Try to group `df` by `agg_col1` and `agg_col2` and calculate the mean of column `a` and minimum of column `b`.\n",
"\n",
"Solution
\n",
" \n",
"
df.groupby(['agg_col1', 'agg_col2']).agg({'a':'mean', 'b':'min'})\n",
"
\n",
" "
]
},
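{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},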
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"\n",
"Time Series\n",
"------------\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`DataFrames` supports `datetime` typed columns, which allow users to interact with and filter data based on specific timestamps."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"date_df = cudf.DataFrame()\n",
"date_df['date'] = pd.date_range('11/20/2018', periods=72, freq='D')\n",
"date_df['value'] = np.random.sample(len(date_df))\n",
"print(date_df.head())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Exercise 6\n",
"\n",
"Try to use `query` to filter `date_df` to only those row with a date before `2018-11-23`. This is a bit trickier than the prior exercises. As a hint, you'll want to explore the `to_datetime` function from the `pandas` library.\n",
"\n",
"Solution
\n",
" \n",
"
\n",
" search_date = pd.to_datetime('2018-11-23')\n",
" date_df.loc[date_df.date <= search_date]\n",
" \n",
"
\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also interact with datetime columns to extract things like the day, hour, minute, and more."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"date_df['minute'] = date_df.date.dt.minute\n",
"print(date_df.head())"
]
},
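{
"cell_type": "markdown",
"metadata": {},
"source": [
"Other components are available through the same `dt` accessor. A quick sketch (not part of the original lab) extracting the day and month as well:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# extract additional datetime components via the dt accessor\n",
"date_df['day'] = date_df.date.dt.day\n",
"date_df['month'] = date_df.date.dt.month\n",
"print(date_df.head())"
]
},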
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"Converting Data Representation\n",
"--------------------------------"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## CuPy\n",
"\n",
"Combining cuDF and CuPy is particularly useful (it's the GPU equivalent of combining pandas and NumPy), so we've put together a Getting Started [guide](https://docs.rapids.ai/api/cudf/nightly/10min-cudf-cupy.html). Let's start of by importing cupy and doing data operations"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import cupy as cp"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we will convert cuDF DataFrame to a CuPy ndarray, using DLpack which is the best method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nelem = 10000\n",
"df = cudf.DataFrame({'a':range(nelem),\n",
" 'b':range(500, nelem + 500),\n",
" 'c':range(1000, nelem + 1000)}\n",
" )\n",
"\n",
"%time arr_cupy = cp.fromDlpack(df.to_dlpack())\n",
"arr_cupy"
]
},
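{
"cell_type": "markdown",
"metadata": {},
"source": [
"The conversion also works in reverse. A minimal sketch, assuming the same generation of DLPack APIs used above (`toDlpack` on the CuPy side, `cudf.from_dlpack` on the cuDF side):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# round-trip: convert the CuPy ndarray back into a cuDF DataFrame\n",
"df_roundtrip = cudf.from_dlpack(arr_cupy.toDlpack())\n",
"print(df_roundtrip.head())"
]
},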
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"For now, note that you can convert a DataFrame or a Series to a CuPy array with `.values`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.values"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Pandas"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Converting a cuDF `DataFrame` to a pandas `DataFrame`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.head().to_pandas()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## Numpy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Converting a cuDF `DataFrame` to a numpy `ndarray`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df.as_matrix()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Converting a cuDF `Series` to a numpy `ndarray`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(cp.asarray(df['a']))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"Getting Data In/Out\n",
"------------------------\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"## CSV"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Writing to a CSV file, using a GPU-accelerated CSV writer."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if not os.path.exists('example_output'):\n",
" os.mkdir('example_output')\n",
" \n",
"df.to_csv('example_output/foo.csv', index=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Reading from a csv file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = cudf.read_csv('example_output/foo.csv')\n",
"print(df)"
]
},
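{
"cell_type": "markdown",
"metadata": {},
"source": [
"`read_csv` accepts many of the familiar pandas options. As a brief sketch (not part of the original lab), here we read back only a subset of the columns with `usecols`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# read only columns 'a' and 'b' back from the CSV file\n",
"subset = cudf.read_csv('example_output/foo.csv', usecols=['a', 'b'])\n",
"print(subset.head())"
]
},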
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That's it! You've got the basics of cuDF down! Let's talk a little bit about the computational performance of cuDF and GPUs."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"\n",
"# Performance"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One of the primary reasons to use cuDF over pandas is performance. For some workflows, the GPU can be **much** faster than the CPU. Let's illustrate this by starting with a small example: creating a DataFrame and calculating the sum of a column."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = np.random.rand(10000000) # 10 million values"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pdf = pd.DataFrame({'a': a})\n",
"cdf = cudf.DataFrame({'a': a})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%timeit\n",
"pdf['a'].sum()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%timeit\n",
"cdf['a'].sum()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pretty cool! This is a pretty small example, though."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"### A More Realistic Example: Sensor Data Analytics\n",
"\n",
"To get a more realistic sense of how powerful cuDF and GPUs can be, let's imagine you had a fleet of sensors that collect data every millisecond. These sensors could be measuring pressure, temperature, or something else entirely.\n",
"\n",
"Let's imagine we want to analyze one day's worth of sensor data. We'll assign random values for the sensor `value` to use for this example. We'll start by creating the data, and generating some datetime related features like we learned about in the above tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"date_df = pd.DataFrame()\n",
"date_df['date'] = pd.date_range(start='2019-07-05', end='2019-07-06', freq='ms')\n",
"date_df['value'] = np.random.sample(len(date_df))\n",
"\n",
"date_df['hour'] = date_df.date.dt.hour\n",
"date_df['minute'] = date_df.date.dt.minute\n",
"\n",
"print(date_df.shape)\n",
"date_df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Just creating the data takes a while! Let's do our analysis. From our sensor data, we want to get the maximum sensor value for each minute. Since we don't want to combine values in the same minute of different hours, we'll need to do a hierarchical groupby."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%time results = date_df.groupby(['hour', 'minute']).agg({'value':'max'})\n",
"results.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is fairly slow! Imagine if we had a fleet of sensors. Time would become a serious concern.\n",
"\n",
"Let's try this in cuDF now, using the GPU DataFrame. We'll run the same code as above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"\n",
"date_df = cudf.DataFrame()\n",
"date_df['date'] = pd.date_range(start='2019-07-05', end='2019-07-06', freq='ms')\n",
"date_df['value'] = np.random.sample(len(date_df))\n",
"\n",
"date_df['hour'] = date_df.date.dt.hour\n",
"date_df['minute'] = date_df.date.dt.minute\n",
"date_df['second'] = date_df.date.dt.second\n",
"\n",
"print(date_df.shape)\n",
"print(date_df.head())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%time results = date_df.groupby(['hour', 'minute', 'second']).agg({'value':'max'})\n",
"results.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data creation accelerated from 23 s to 2.5 seconds and the processing from 3 s to 40 ms."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Conclusion\n",
"\n",
"Now we are familiar with creating CUDF dataframes, selecting, viewing and manipulating data. The operations are almost the same as pandas, and can easily replace the pandas operations in our traditional data science pipeline. While the results may vary slightly on different GPUs, it should be clear that GPU acceleration can make a significant difference. We can get much faster results with the same code! The next tutorial will give us more control over the data and allow us to play with user defined functions. The kernel function allow us to specify how the data is chunked, thus we can accelerate the row-wise operations with greater control over execution."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Licensing\n",
" \n",
"This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to learn about implementing machine learning algorithms in CuML using CuPy as the data input format, refer to the bonus lab here."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"[1]\n",
"[2](02-Intro_to_cuDF_UDFs.ipynb)\n",
"[3](03-Cudf_Exercise.ipynb)\n",
" \n",
" \n",
" \n",
" \n",
"[Next Notebook](02-Intro_to_cuDF_UDFs.ipynb)\n",
"\n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"[Home Page](../START_HERE.ipynb)"
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
}
},
"nbformat": 4,
"nbformat_minor": 4
}