Kiran Matty, Director of Product @ Aerospike
Kiran manages ecosystem products that relate to AI/ML infrastructure, SQL access, data streaming, etc. His experience spans big data and ML infrastructure, data security, and analytics. Prior to Aerospike, he was a product manager at Visa, Hortonworks, HPE, and Cisco. His interests include large scale distributed systems and AI/ML.
Add Horsepower to AI/ML Streaming Pipeline
Wed Jun 16, 12:15 PM - 12:50 PM, PT
The more time data science teams spend on model training, the longer business value is delayed: no value is created until the model is deployed in production. Traditional HDD-based systems are ill-suited to training, which is highly I/O intensive because of the complex transformations involved in data preparation. Moreover, training is not a one-time process. Trends and patterns in the data change rapidly, so models must be retrained to address drift and continually improve performance in production. Data scientists often experiment with thousands of models, and speeding up this process has significant business implications.
In this talk, we will cover how you can accelerate an AI/ML pipeline by speeding up data loads with the Aerospike database, which leverages its hybrid memory architecture to achieve sub-millisecond reads and writes. In a hybrid memory architecture the index is stored in memory (not persisted), while data is stored on persistent storage (SSD) and read directly from disk; no disk I/O is required to access the index. For time-sensitive, high-throughput use cases such as fraud detection, you need a transactional database at the edge that can handle high-velocity ingestion and support millions of IOPS. Events are then streamed downstream to your AI/ML platform for training or to your inference server for predictions. We will share the reference architecture of a highly performant AI/ML training and inference pipeline consisting of Apache Pulsar, Apache Spark 3.0, the Aerospike database, and its Spark and Pulsar connectors. This architecture can be extended to other use cases that demand low latency and high throughput without blowing your budget.
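To make the hybrid memory idea concrete, here is a minimal toy sketch (not Aerospike's actual implementation): the primary index lives entirely in RAM and maps each key to an offset on disk, so a read costs zero disk I/O for the index lookup and a single seek-and-read for the record data. The `HybridStore` class and its key names are illustrative assumptions, not Aerospike APIs.

```python
import os
import tempfile

class HybridStore:
    """Toy model of a hybrid memory architecture:
    index in RAM, record data on persistent storage."""

    def __init__(self, path):
        self.index = {}                # key -> (offset, length), kept in memory
        self._f = open(path, "wb+")    # record data lives on disk

    def put(self, key, value: bytes):
        self._f.seek(0, os.SEEK_END)   # append record to the data file
        offset = self._f.tell()
        self._f.write(value)
        self.index[key] = (offset, len(value))  # update in-memory index only

    def get(self, key) -> bytes:
        offset, length = self.index[key]  # pure RAM lookup, no disk I/O
        self._f.seek(offset)              # one disk read fetches the record
        return self._f.read(length)

    def close(self):
        self._f.close()

# Usage: store two toy fraud-detection events and read one back.
with tempfile.TemporaryDirectory() as d:
    store = HybridStore(os.path.join(d, "data.bin"))
    store.put("txn:1", b'{"amount": 42.0}')
    store.put("txn:2", b'{"amount": 7.5}')
    assert store.get("txn:1") == b'{"amount": 42.0}'
    store.close()
```

The trade-off this sketch illustrates is the one the talk relies on: keeping only the small index resident in memory lets the bulk of the data sit on cheaper SSD while every read still completes in a single storage access.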