GPU acceleration enables analytics pros, data scientists, and researchers to address some of the world’s most challenging problems up to several orders of magnitude faster than traditional architectures. In previous articles, I covered the rise of GPU technology and introduced you to price/performance gains for visual analytics applications. In this article, I will dive deeper into GPU-powered in-database machine learning and deep learning using Kinetica’s distributed User-Defined Functions (UDFs) framework.

GPU Acceleration for Data Science

Today, GPU acceleration is being used to ingest billions of streaming records per minute, perform complex calculations, and render millions of data points in visualizations within seconds. GPUs are also being applied extensively in the field of data science for machine learning and deep learning workloads. Open framework designs make it easy to incorporate both open source and proprietary solutions.


To fully appreciate what GPUs bring to machine learning, you need to understand what we used to do. Developing and deploying machine learning models used to be a challenging and time-consuming endeavor. These projects required close collaboration between database administrators and data scientists, since we had to copy large volumes of data into specialized sandbox environments.


Once the data was in place, compute-intensive workloads might run for hours, days, or even weeks. Due to the experimental nature and heavy memory and CPU requirements of data mining processes, nothing else could run concurrently. It was difficult to size servers and estimate project timelines, and worst of all, we might not even find a reliable predictive model in the end.

If we did find and optimize a good machine learning model, deploying and continually updating that model became another phase of the project, one that involved getting application developers, ETL and data warehouse engineers, reporting professionals, and other resources on board to code our routines and use the predictive models. Nothing about these projects was fast or simple.

Today, with the advances in GPU computing and Kinetica’s elegant in-database UDF designs, creating and delivering sophisticated machine learning or deep learning intelligence on large data sources with massively parallel, distributed computing is fast, straightforward, and efficient within a single solution. I wish I had had these capabilities a long time ago; I spent far too many late nights and weekends at the office waiting for predictive model runs to complete.

To summarize what GPU brings to data science workloads:

  • For data acquisition, connectors for data in motion and data at rest with high-speed ingest make it easier to acquire millions of rows of data across disparate systems in seconds
  • For data persistence, the ability to store and manage multi-structured data types in a single GPU database makes all text, image, spatial, and time-series data easily accessible to machine learning and deep learning applications
  • For data preparation, the ability to achieve millisecond response times using popular languages like SQL, C++, Java, and Python makes it easier to explore even the most massive datasets
  • Massively parallel processing makes GPUs optimal for compute-intensive model training on large datasets; it minimizes the need for data sampling and expensive, resource-intensive tuning, and makes performance improvements of 100x possible on commodity hardware
  • Clustered GPU databases distribute data across multiple database shards, enabling parallelized model training for better performance; a scale-out architecture makes it easy to add nodes on demand to improve performance and capacity
  • GPU databases use purpose-built, in-memory vector and matrix operations to take full advantage of the parallelization available in modern GPUs
  • Bundling the machine learning framework and deployment within one environment eases rollout and use throughout the organization, rapidly operationalizing intelligence

Now that I’ve shared why GPU technology is splendid for data science, let me show you how to use it. If you would like to follow along and explore GPU-powered UDFs, you can sign up for a free Kinetica trial at https://www.kinetica.com/trial/ or spin up a one-click instance on Amazon Web Services or Azure.

Introduction to User Defined Functions (UDFs)

User Defined Functions (UDFs) are similar to database stored procedures. Kinetica’s UDF framework democratizes data science by making machine learning, deep learning, and custom predictive functions available to non-technical users within their favorite apps or reporting tools, such as Excel, Tableau, TIBCO Spotfire, or MicroStrategy.


In-database UDF capabilities streamline the operationalization of advanced analytics. Analytics pros and data scientists can efficiently develop and deploy intelligent machine learning or artificial intelligence libraries as UDFs without having to move data. UDFs can receive table data, execute calculations, and save results back to the database. With direct access to APIs via UDFs, compute-to-grid analytics can be accomplished with either custom or packaged code.
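To make that data flow concrete, below is a minimal sketch of a distributed UDF written against Kinetica’s Python UDF API (the kinetica_proc module). Treat it as illustrative rather than authoritative: the column names "x" and "y_pred" are hypothetical, and the exact API surface may vary by version, so check Kinetica’s UDF documentation before relying on it.

    # Minimal sketch of a distributed Kinetica UDF in Python.
    # Assumes the kinetica_proc module shipped with Kinetica's UDF API;
    # the column names "x" and "y_pred" are hypothetical.
    from kinetica_proc import ProcData

    proc_data = ProcData()

    # Each UDF instance sees only the data shard assigned to it.
    in_table = proc_data.input_data[0]
    out_table = proc_data.output_data[0]
    out_table.size = in_table.size  # allocate one output row per input row

    x = in_table["x"]
    y_pred = out_table["y_pred"]

    # Apply a toy linear model to every row in this shard.
    for i in range(in_table.size):
        y_pred[i] = 2.0 * x[i] + 1.0

    # Signal successful completion back to the database.
    proc_data.complete()

Because the database runs one copy of this script per shard, the per-row work parallelizes across the cluster without any explicit distribution logic inside the UDF itself.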


Creating and Using User Defined Functions (UDFs)

To create a UDF within Kinetica, navigate to the UDF menu and select New. In the image below, note that two Kinetica UDFs have already been created; they are listed along with the related commands used to invoke them.

[Image: the UDF menu listing existing UDFs and their commands]

After clicking New, a window appears where you can name your UDF, indicate its type, add your code files, and create the UDF within the Kinetica database.

[Image: the New UDF dialog for naming the function, choosing its type, and adding code files]

Currently Kinetica supports two types of UDFs:

  1. Distributed – invoked within the database, executes in parallel against each data shard of the specified tables
  2. Non-distributed – invoked externally to the database by making use of the existing Kinetica APIs via a connection URL

A distributed UDF can be passed zero or more input tables and can write to zero or more output tables. Either type of UDF can also accept parameters and return results.
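To make the distinction concrete, the sketch below registers and runs a distributed UDF through Kinetica’s Python API (the gpudb module), which wraps the /create/proc and /execute/proc endpoints. The host, proc name, file name, and table names are hypothetical placeholders, so treat this as a sketch of the workflow rather than a verified recipe.

    # Sketch: registering and executing a distributed UDF via the
    # gpudb Python API. Host, proc, file, and table names are
    # hypothetical placeholders.
    import gpudb

    db = gpudb.GPUdb(host="127.0.0.1", port="9191")

    # Read the UDF source so it can be shipped to the database.
    with open("my_udf.py", "rb") as f:
        udf_code = f.read()

    # Register the proc; "distributed" runs it in parallel on every
    # shard, while "nondistributed" runs a single external instance.
    db.create_proc(
        proc_name="my_udf",
        execution_mode="distributed",
        files={"my_udf.py": udf_code},
        command="python",
        args=["my_udf.py"],
        options={},
    )

    # Execute the proc against an input table, writing to an output table.
    response = db.execute_proc(
        proc_name="my_udf",
        params={},
        bin_params={},
        input_table_names=["car_sales"],
        input_column_names={},
        output_table_names=["car_sales_predictions"],
        options={},
    )
    print(response)  # the response includes a run ID for status checks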

Kinetica’s UDFs can be written in popular programming languages such as Python, Java, and C++, and can leverage frameworks like TensorFlow, Caffe, and Torch. Data is passed in and out of UDFs using memory-mapped files, so in theory a Kinetica UDF can be written in any language that can access memory-mapped files. Kinetica UDFs could even be used with Tableau’s TabPy as an orchestration layer for machine learning. It is also possible for a UDF to be a shell script or helper that launches another program to process the files.

To quickly see your Kinetica UDF in action, you can run the proc by clicking the Execute command. Alternatively, you could create a Kinetica dashboard and add a UDF chart type to it as shown below.

[Image: a Kinetica dashboard with a UDF chart]

After your UDF is deployed within the database, it can be used in SQL query statements or API calls. The next image illustrates the Kinetica UDF named linearRegression, a custom Java routine that returns predicted future car sales quantities. You could alternatively call the UDF from a query within Tableau or any other application. It really is that easy.

[Image: results of the linearRegression UDF predicting future car sales]
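For example, once linearRegression is registered, you can invoke it and read back its predictions programmatically. The sketch below uses the same gpudb Python API as above; the input and output table names are hypothetical, so adapt them to your own schema.

    # Sketch: running the linearRegression UDF and reading its output.
    # Table names are hypothetical placeholders.
    import gpudb

    db = gpudb.GPUdb(host="127.0.0.1", port="9191")

    resp = db.execute_proc(
        proc_name="linearRegression",
        params={},
        bin_params={},
        input_table_names=["car_sales_history"],
        input_column_names={},
        output_table_names=["car_sales_forecast"],
        options={},
    )
    # execute_proc is asynchronous; in practice, poll
    # db.show_proc_status() with the returned run ID until the run
    # completes before reading results.

    # Fetch a few predicted rows; with JSON encoding, each record is
    # returned as a JSON string in the "records_json" response field.
    result = db.get_records(
        table_name="car_sales_forecast",
        offset=0,
        limit=10,
        encoding="json",
        options={},
    )
    for record in result["records_json"]:
        print(record)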

To see a step-by-step recorded demonstration of Kinetica’s UDF functionality, watch the Forecast New Car Sales using Kinetica UDFs video. For more detail about writing and running UDFs in your favorite programming language, please refer to Kinetica’s UDF documentation.

To Learn More

GPU hardware acceleration is revolutionizing high-performance computing. Kinetica’s GPU database is not only redefining what is possible but also improving the entire data science lifecycle.

In this article, I discussed the benefits of GPUs for data science and showed you how to get started creating your own in-database UDFs. If you are interested in learning more about GPU-accelerated analytics, data science, and other use cases, I encourage you to explore the resources provided by the market-leading GPU database vendor, Kinetica.