HARNESSING GPU POWER FOR MACHINE LEARNING
DOI: https://doi.org/10.26577/jpcsit2023v1i4a5

Keywords: GPU, machine learning, linear regression, support vector machine regression

Abstract
In the rapidly evolving landscape of data science and machine learning, the need for high-speed, efficient data processing is more critical than ever. This article delves into the pivotal role of graphics processing units (GPUs) in transforming the realm of data analytics and machine learning. GPUs, originally designed for rendering graphics in video games, have emerged as powerhouse tools in scientific computing due to their ability to perform parallel computations swiftly and effectively.
This work studied and compared the performance of machine learning algorithms implemented with the Scikit-Learn and RAPIDS cuML libraries, the latter running on a GPU. Testing was carried out on datasets of varying sizes, and the results confirmed significant execution speedups with RAPIDS cuML, underscoring the practical importance of GPU acceleration for processing large data sets. In addition, the developed algorithms were successfully applied to predict the oil recovery factor based on the Buckley-Leverett mathematical model, demonstrating their effectiveness in the oil and gas industry. Overall, this article serves as a comprehensive overview of the current state and future prospects of GPU utilization in data processing and machine learning, providing valuable insights for both practitioners and researchers in the field.
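The comparison described above hinges on the fact that RAPIDS cuML deliberately mirrors the Scikit-Learn estimator API, so the same benchmark code can drive either library. The sketch below is a minimal, hypothetical illustration of such a timing harness (the data sizes and helper name `time_fit` are illustrative, not taken from the article); it runs the CPU path with Scikit-Learn, and the GPU path is obtained by swapping the import for `from cuml.linear_model import LinearRegression`, which assumes a CUDA-capable GPU with RAPIDS installed.

```python
import time
import numpy as np
from sklearn.linear_model import LinearRegression
# GPU variant (requires RAPIDS + CUDA GPU), API-compatible with the line above:
# from cuml.linear_model import LinearRegression

def time_fit(model, X, y):
    """Fit the model and return it along with the wall-clock fit time in seconds."""
    start = time.perf_counter()
    model.fit(X, y)
    return model, time.perf_counter() - start

# Synthetic regression data: 100,000 samples, 20 features (sizes are illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 20))
true_coef = rng.standard_normal(20)
y = X @ true_coef + 0.1 * rng.standard_normal(100_000)

model, elapsed = time_fit(LinearRegression(), X, y)
print(f"Fit took {elapsed:.3f} s")
```

Because the estimator interface is shared, repeating this loop over growing data volumes for both libraries yields the CPU-versus-GPU speedup curves the study reports.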