Presented by
Alluxio
195 Going
Sunnyvale, California
Registration Closed
This event is not currently taking registrations. You may contact the host or subscribe to receive updates.
About Event

**********************************************************************

Our capacity is full and we are no longer accepting in-person registrations. If you missed out, register for the live stream using this link: https://us06web.zoom.us/webinar/register/WN_4kFJ534vRWqIW1JLumohJg#/. You will also receive the slides and recordings.

**********************************************************************

Join leading data infrastructure experts on January 25th, 2024 for the Data Infra Meetup hosted by Alluxio & Uber! This is a premier opportunity to connect with developers and researchers pushing the boundaries of data infrastructure.

This hybrid meetup will be held at Uber's Sunnyvale office and live-streamed*. You will hear talks by technical leaders from Uber, ByteDance, CMU, and Alluxio, who will share insights and real-world examples about optimizing data pipelines, accelerating queries, designing scalable architectures, and more.

Immerse yourself in learning, networking, and conversations. Food and drinks are on us! 🥂💃🏻

Here's the exciting lineup of talks and speakers for the night:

  • 3:30 - 4:00 pm | Registration, Networking & Happy Hour 👯🍻

  • 4:00 - 4:05 pm | Welcome & Opening Remarks (Bin Fan, Chief Architect & VP of Open Source @ Alluxio)

  • 4:05 - 4:35 pm | Uber’s Data Storage Evolution (Jing Zhao, Principal Engineer @ Uber) 🚕

  • 4:35 - 4:50 pm | Accelerate Your Trino/Presto Queries – Gain the Alluxio Edge (Jingwen Ouyang, Product Manager @ Alluxio) 📈

  • 4:50 - 5:20 pm | ByteDance’s Native Parquet Reader (Shengxuan Liu, Software Engineer @ ByteDance) 🔍

  • 5:20 - 5:30 pm | Break Time

  • 5:30 - 6:00 pm | FIFO Queues Are All You Need for Cache Eviction (Juncheng Yang, PhD Student @ CMU) 📄

  • 6:00 - 6:30 pm | Accelerate Distributed PyTorch/Ray Workloads in the Cloud (Chunxu Tang & Siyuan Sheng, Alluxio) 🤖

  • 6:30 - 7:00 pm | MACARON: Multi-cloud/region Aware Cache Auto-ReconfiguratiON (Hojin Park, PhD Student @ CMU) ☁️

  • 7:00 - 8:00 pm | Happy Hour Continued 🍻

Space is limited, so register soon to secure your spot! In-person registration will close on Sunday, 1/21.

*This registration is for in-person only. If you are unable to attend in person, please register for the live-stream here: https://us06web.zoom.us/webinar/register/WN_4kFJ534vRWqIW1JLumohJg#/

Below are the details of each presentation:

Uber’s Data Storage Evolution

Uber has built one of the largest data lakes in the industry, storing exabytes of data. In this talk, we will introduce the evolution of our data storage architecture and delve into several key initiatives from the past few years. Specifically, we will cover 1) the scalability challenges of our on-prem HDFS clusters and how we solved them, 2) the efficiency optimizations that significantly reduced storage overhead and unit cost without compromising reliability or performance, and 3) the challenges we are facing during the ongoing cloud migration and our solutions.

Accelerate Your Trino/Presto Queries – Gain the Alluxio Edge

In this session, Jingwen will present an overview of using Alluxio Edge caching to accelerate Trino or Presto queries. She will offer practical best practices for using distributed caching with compute engines. The session will feature insights from real-world examples.

ByteDance’s Parquet Reader

Shengxuan Liu from ByteDance will present ByteDance’s new native Parquet reader. The talk covers the architecture and key features of the reader, and how it improves data processing efficiency.

FIFO Queues Are All You Need for Cache Eviction

As a cache eviction algorithm, FIFO has many attractive properties, such as simplicity, speed, scalability, and flash-friendliness. The most prominent criticism of FIFO is its low efficiency (high miss ratio). In this talk, I will describe a simple, scalable FIFO-based algorithm with three static queues (S3-FIFO). Evaluated on 6594 cache traces from 14 datasets, we show that S3-FIFO has lower miss ratios than state-of-the-art algorithms across traces. Moreover, S3-FIFO’s efficiency is robust: it has the lowest mean miss ratio on 10 of the 14 datasets. FIFO queues enable S3-FIFO to achieve good scalability, with 6× higher throughput than optimized LRU at 16 threads. Our insight is that most objects in skewed workloads will only be accessed once in a short window, so it is critical to evict them early (also called quick demotion). The key to S3-FIFO is a small FIFO queue that filters out most objects from entering the main cache, which provides a guaranteed demotion speed and high demotion precision.
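The small-filter-plus-main-queue mechanism described in the abstract can be sketched in a few dozen lines of Python. This is a simplified illustration, not the authors' implementation: the 10% small-queue ratio and the access-counter cap are assumed defaults, and the ghost-queue sizing and promotion details are reduced to their essentials.

```python
from collections import deque

class S3FIFO:
    """Toy sketch of the S3-FIFO idea: a small FIFO filters one-hit
    objects before they reach the main FIFO; a ghost FIFO of recently
    evicted keys rescues objects the filter misjudged."""

    def __init__(self, capacity, small_ratio=0.1):
        self.small_cap = max(1, int(capacity * small_ratio))
        self.main_cap = capacity - self.small_cap
        self.small = deque()   # newly admitted keys
        self.main = deque()    # keys that demonstrated reuse
        self.ghost = deque()   # keys recently evicted from small (metadata only)
        self.freq = {}         # tiny per-key access counter, capped at 3

    def get(self, key):
        """Return True on hit; on miss, admit the key and return False."""
        if key in self.freq:
            self.freq[key] = min(self.freq[key] + 1, 3)
            return True
        self._insert(key)
        return False

    def _insert(self, key):
        if key in self.ghost:              # seen recently: go straight to main
            self.ghost.remove(key)
            self._evict_main_if_full()
            self.main.append(key)
        else:
            self._evict_small_if_full()
            self.small.append(key)
        self.freq[key] = 0

    def _evict_small_if_full(self):
        while len(self.small) >= self.small_cap:
            k = self.small.popleft()
            if self.freq.pop(k) > 0:       # reused while in small: promote
                self._evict_main_if_full()
                self.main.append(k)
                self.freq[k] = 0
            else:                          # one-hit wonder: quick demotion
                self.ghost.append(k)
                if len(self.ghost) > self.main_cap:
                    self.ghost.popleft()

    def _evict_main_if_full(self):
        while len(self.main) >= self.main_cap:
            k = self.main.popleft()
            if self.freq[k] > 0:           # lazy promotion: reinsert, decay count
                self.freq[k] -= 1
                self.main.append(k)
            else:
                del self.freq[k]
```

The filter is what makes one-hit objects cheap: they pass through the small queue once and leave only a ghost entry behind, never displacing anything in the main cache.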

Accelerate Distributed PyTorch/Ray Workloads in the Cloud

In this session, cloud optimization specialists Chunxu and Siyuan will break down the challenges and present a fresh architecture designed to optimize I/O across the data pipeline, ensuring GPUs operate at peak performance. The integrated solution of PyTorch/Ray + Alluxio + S3 offers a promising way forward, and the speakers will delve into its practical applications. Attendees will gain not only theoretical insights but also hands-on instructions and demonstrations of deploying this architecture in Kubernetes, tailored for TensorFlow/PyTorch/Ray workloads in the public cloud.

MACARON: Multi-cloud/region Aware Cache Auto-ReconfiguratiON

The increasing demand for multi-cloud and multi-region data access brings forth challenges related to high data transfer costs and latency. In response, we introduce Macaron, an auto-configuring cache system designed to minimize cost for remote data access. A key insight behind Macaron is that cloud cache sizes are tied to cost limitations, not hardware limits, shifting the way we have been thinking about cache design and eviction policies. Macaron dynamically configures cache size and storage type mix, adapting to workload changes and often utilizing object storage as a cost-efficient option for most cache contents. We demonstrate that Macaron can reduce multi-cloud workload costs by 92% and multi-region costs by 88%, mainly by reducing outgoing data transfer.
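The abstract's key insight, that cloud cache size is a cost decision rather than a hardware one, can be made concrete with a toy calculation: pick the cache size that minimizes storage cost plus egress cost for misses. The prices and the miss-ratio curve below are made-up illustrative numbers, not Macaron's actual model or measurements.

```python
# Illustrative prices, roughly in the shape of public-cloud pricing.
STORAGE_PRICE = 0.023   # $/GB-month for cache storage (assumed)
EGRESS_PRICE = 0.09     # $/GB for cross-region/cloud transfer (assumed)

def monthly_cost(cache_gb, demand_gb, miss_ratio):
    """Monthly cost of a remote-data cache: pay for the cache's
    storage, plus data transfer for every miss."""
    return cache_gb * STORAGE_PRICE + demand_gb * miss_ratio * EGRESS_PRICE

# Hypothetical miss-ratio curve for one workload: bigger cache, fewer misses,
# but with diminishing returns.
miss_curve = {100: 0.40, 200: 0.20, 400: 0.10, 800: 0.05, 1600: 0.04}
demand_gb = 10_000  # GB read from the remote region per month

best_size = min(miss_curve, key=lambda s: monthly_cost(s, demand_gb, miss_curve[s]))
```

With these numbers the optimum is 800 GB, not the largest option: past that point, the extra storage costs more than the egress it saves. A system like Macaron re-runs this kind of trade-off continuously as the workload shifts.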

By registering, you agree with Alluxio's privacy policy and terms of entering the Uber office.
