{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Memmapping" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "The numpy package makes it possible to memory map large contiguous chunks of binary files as shared memory for all the Python processes running on a given host:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "* Creating a `numpy.memmap` instance with the `w+` mode creates a file on the filesystem and zeros its content. " ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n" ] } ], "source": [ "# Cleanup any existing file from past session (necessary for windows)\n", "import os\n", "\n", "current_dir = os.path.abspath(os.path.curdir)\n", "mmap_filepath = os.path.join(current_dir, 'files', 'small.mmap')\n", "if os.path.exists(mmap_filepath):\n", " os.unlink(mmap_filepath)\n", "\n", "mm_w = np.memmap(mmap_filepath, shape=10, dtype=np.float32, mode='w+')\n", "print(mm_w)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "* This binary file can then be mapped as a new numpy array by all the engines having access to the same filesystem. \n", "* The `mode='r+'` opens this shared memory area in read write mode:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n" ] } ], "source": [ "mm_r = np.memmap('files/small.mmap', dtype=np.float32, mode='r+')\n", "print(mm_r)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 42. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n" ] } ], "source": [ "mm_w[0] = 42\n", "print(mm_w)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 42. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n" ] } ], "source": [ "print(mm_r)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "* Memory mapped arrays created with `mode='r+'` can be modified and the modifications are shared \n", " - in case of multiple process" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "mm_r[1] = 43" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 42. 43. 0. 0. 0. 0. 0. 0. 0. 
0.]\n" ] } ], "source": [ "print(mm_r)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Memmap Operations" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Memmap arrays generally behave very much like regular in-memory numpy arrays:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "85.0\n", "sum=85.0, mean=8.5, std=17.0014705657959\n" ] } ], "source": [ "print(mm_r.sum())\n", "print(\"sum={0}, mean={1}, std={2}\".format(mm_r.sum(), \n", " np.mean(mm_r), np.std(mm_r)))" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Before allocating more data let us define a couple of utility functions from the previous exercise (and more) to monitor what is used by which engine and what is still free on the cluster as a whole:" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "* Let's allocate a 80MB memmap array:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "memmap([ 0., 0., 0., ..., 0., 0., 0.])" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Cleanup any existing file from past session (necessary for windows)\n", "import os\n", "if os.path.exists('files/big.mmap'):\n", " os.unlink('files/big.mmap')\n", "\n", "np.memmap('files/big.mmap', shape=10 * int(1e6), dtype=np.float64, mode='w+')" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "No significant memory was used in this operation as we just asked the OS to allocate the buffer on the hard drive and just maitain a virtual memory area as a cheap reference to this buffer.\n", "\n", "Let's open new references to the same buffer from all the engines at once:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 393 µs, sys: 577 µs, total: 970 µs\n", "Wall time: 773 µs\n" ] } ], "source": [ "%time big_mmap = np.memmap('files/big.mmap', dtype=np.float64, mode='r+')" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "data": { "text/plain": [ "memmap([ 0., 0., 0., ..., 0., 0., 0.])" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "big_mmap" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "* Let's trigger an actual load of the data from the drive into the in-memory disk cache of the OS, this can take some time depending on the speed of the hard drive (on the order of 100MB/s to 300MB/s hence 3s to 8s for this dataset):" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 39.4 ms, sys: 89.6 ms, total: 129 ms\n", "Wall time: 602 ms\n" ] }, { "data": { "text/plain": [ "memmap(0.0)" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], 
"source": [ "%time np.sum(big_mmap)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "* Now back into memory" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 16.6 ms, sys: 2.2 ms, total: 18.8 ms\n", "Wall time: 16.3 ms\n" ] }, { "data": { "text/plain": [ "memmap(0.0)" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%time np.sum(big_mmap)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Example of practical use of this approach" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "This strategy makes it very interesting to load the readonly datasets of machine learning problems, especially when the same data is reused over and over by concurrent processes as can be the case when doing learning curves analysis or grid search (**Hyperparameter Optimisation** & **Model Selection**).\n", "\n", "This is of great importance in case of multiple and **embarassingly** parallel processes (like **Grid Search**)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Memmaping Nested Numpy-based Data Structures with Joblib" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "**joblib** is a utility library included in the **sklearn** package. Among other things it provides tools to serialize objects that comprise large numpy arrays and reload them as memmap backed datastructures.\n", "\n", "To demonstrate it, let's create an arbitrary python datastructure involving numpy arrays:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "data": { "text/plain": [ "(array([[ 0., 0., 0., 0.],\n", " [ 0., 0., 0., 0.],\n", " [ 0., 0., 0., 0.]], dtype=float32), array([[1, 1, 1, 1],\n", " [1, 1, 1, 1],\n", " [1, 1, 1, 1]]))" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "\n", "class MyDataStructure(object):\n", " \n", " def __init__(self, shape):\n", " self.float_zeros = np.zeros(shape, dtype=np.float32)\n", " self.integer_ones = np.ones(shape, dtype=np.int64)\n", " \n", "data_structure = MyDataStructure((3, 4))\n", "data_structure.float_zeros, data_structure.integer_ones" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "We can now persist this datastructure to disk:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "['files/data_structure.pkl',\n", " 'files/data_structure.pkl_01.npy',\n", " 'files/data_structure.pkl_02.npy']" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.externals import joblib\n", "joblib.dump(data_structure, 'files/data_structure.pkl')" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-rw-r--r-- 1 valerio staff 267 Jul 21 10:17 files/data_structure.pkl\r\n", "-rw-r--r-- 1 valerio staff 
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Memmapping Nested Numpy-based Data Structures with Joblib" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "**joblib** is a utility library bundled with the **sklearn** package. Among other things, it provides tools to serialize objects that contain large numpy arrays and to reload them as memmap-backed data structures.\n", "\n", "To demonstrate it, let's create an arbitrary Python data structure involving numpy arrays:" ] },
{ "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "data": { "text/plain": [ "(array([[ 0., 0., 0., 0.],\n", " [ 0., 0., 0., 0.],\n", " [ 0., 0., 0., 0.]], dtype=float32), array([[1, 1, 1, 1],\n", " [1, 1, 1, 1],\n", " [1, 1, 1, 1]]))" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "\n", "class MyDataStructure(object):\n", "\n", "    def __init__(self, shape):\n", "        self.float_zeros = np.zeros(shape, dtype=np.float32)\n", "        self.integer_ones = np.ones(shape, dtype=np.int64)\n", "\n", "data_structure = MyDataStructure((3, 4))\n", "data_structure.float_zeros, data_structure.integer_ones" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "We can now persist this data structure to disk:" ] },
{ "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "['files/data_structure.pkl',\n", " 'files/data_structure.pkl_01.npy',\n", " 'files/data_structure.pkl_02.npy']" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.externals import joblib\n", "joblib.dump(data_structure, 'files/data_structure.pkl')" ] },
{ "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "-rw-r--r-- 1 valerio staff 267 Jul 21 10:17 files/data_structure.pkl\r\n", "-rw-r--r-- 1 valerio staff 176 Jul 21 10:17 files/data_structure.pkl_01.npy\r\n", "-rw-r--r-- 1 valerio staff 128 Jul 21 10:17 files/data_structure.pkl_02.npy\r\n" ] } ], "source": [ "!ls -l files/data_structure*" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "A memmap-backed copy of this data structure can then be loaded:" ] },
{ "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "(memmap([[ 0., 0., 0., 0.],\n", " [ 0., 0., 0., 0.],\n", " [ 0., 0., 0., 0.]], dtype=float32), memmap([[1, 1, 1, 1],\n", " [1, 1, 1, 1],\n", " [1, 1, 1, 1]]))" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "memmaped_data_structure = joblib.load('files/data_structure.pkl',\n", "                                      mmap_mode='r+')\n", "memmaped_data_structure.float_zeros, memmaped_data_structure.integer_ones" ] }
], "metadata": { "celltoolbar": "Slideshow", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.4.3" } }, "nbformat": 4, "nbformat_minor": 0 }