


Apache Iceberg™ Europe Community Meetup - July 2025 Edition
Apache Iceberg™ Meetup Europe - live in Berlin! 🌍
Join us for the very first Apache Iceberg™ Meetup Europe, live in Berlin! The event is co-hosted by Snowflake, Tower, Tech-Europe and Vakamo.
Vakamo invites everyone to this exciting event — register now and follow the Apache Iceberg Meetup Europe Community on lu.ma!
Can't join us in person? No worries—register anyway to join the live stream, receive the event recordings and stay connected with the community.
Also make sure to join our Slack Channel to stay up-to-date with future meetups in Europe!
Agenda
5:30 pm – Registration & Networking
6:30 pm – 1st set of short talks
🌟 Dmytro Koval (Snowflake) – Iceberg Geo: High Performance Geospatial Analytics on Iceberg Tables
🌟 George Zubrienko & Vitalii Savitskii (ECCO) – Why Do We Use a REST-catalogue with Apache Iceberg?
🌟 Serhii Sokolenko (Tower.dev) – Preparing your AI Agents for the Ice(berg) Age
7:20 pm – Networking break
7:40 pm – 2nd set of short talks
🌟 Maximilian Michels (Apple) – Dynamic, Scalable, and Schema-Evolving: Introducing the Flink Dynamic Iceberg Sink
🌟 Vinnie (Vinicius) Dalpiccol (Staffbase) – From Shards to One: Staffbase's Path to a Cross-Functional Platform
8:10 pm – More Networking
9:15 pm – Event close
Livestream
We are setting up a livestream of the talks via Zoom. The stream starts at around 6:00 pm and will be available here: TBA
Talks will also be uploaded to YouTube after the event.
Presentations & Speakers
📣 Iceberg Geo: High Performance Geospatial Analytics on Iceberg Tables
The latest Iceberg V3 specification introduces several new data types, including Variant and Geospatial types, areas where Snowflake played an active role in shaping the standard. In this talk, Dmytro will walk you through the design of the Geometry and Geography data types in Iceberg and share key highlights from their implementation.
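If you want a feel for the new types ahead of the talk, a table schema with a geometry column might look like the hedged Java sketch below; Types.GeometryType and its crs84() factory are our assumption based on the V3 spec work, so check your Iceberg release's javadoc for the exact names.

```java
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

// Hedged sketch: a format-version-3 schema with a geometry column.
// Types.GeometryType and crs84() are assumed names from the V3 spec
// work; verify them against your Iceberg release before use.
Schema schema = new Schema(
    Types.NestedField.required(1, "id", Types.LongType.get()),
    Types.NestedField.optional(2, "geom", Types.GeometryType.crs84()));
```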
Dmytro is a Software Engineer at Snowflake, where he focuses on high-performance query processing and efficient data representation. Prior to Snowflake, he built scalable data processing infrastructure at companies such as HERE Technologies and SAP.
🎙️ Why Do We Use a REST-catalogue with Apache Iceberg?
Every data journey has its teachable moments, and with them come decisions that shape the efficiency and scalability of an organization’s data infrastructure. Our story with Apache Iceberg is no different.
In this talk, we’ll take you through our experiences with Apache Iceberg, focusing on our choice of catalogue solutions. While data catalogues are specified to be interoperable, and therefore to some degree interchangeable, we’ll share why we converged on a REST-catalogue, what improvements we have gained from it, and why it might be the right choice for your data journey too.
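If you're curious what "converging on a REST-catalogue" means in code, here is a minimal Java sketch against Iceberg's RESTCatalog client; the endpoint URI and warehouse location are placeholders, not ECCO's actual setup.

```java
import java.util.Map;

import org.apache.iceberg.CatalogProperties;
import org.apache.iceberg.rest.RESTCatalog;

public class RestCatalogDemo {
    public static void main(String[] args) {
        // Point the client at any spec-compliant REST catalog endpoint;
        // the URI and warehouse below are placeholders.
        RESTCatalog catalog = new RESTCatalog();
        catalog.initialize("demo", Map.of(
            CatalogProperties.URI, "http://localhost:8181",
            CatalogProperties.WAREHOUSE_LOCATION, "s3://my-bucket/warehouse"));

        // Because the REST protocol is an open specification, this client
        // code stays the same no matter which implementation serves it.
        catalog.listNamespaces().forEach(System.out::println);
    }
}
```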
George Zubrienko: I’ve been working in Data Engineering and Data Science for the past 10 years or so, focusing on building data processing platforms and tools that support data scientists in their daily work. I am a strong believer in responsible open source (use and contribute) and the guy who can help you adopt cutting-edge tech so you won’t scrap it next year. Outside of my job, I enjoy the company of my wife and 4-year-old daughter, watching her grow and open up to the world around her.
Vitalii Savitskii: Over the past ten years, my career has been primarily focused on developing platform tools for developers. Now, as a Senior Platform Engineer at ECCO Sko A/S, I handle a broad range of responsibilities, including software development for the ECCO data platform and managing cloud resources.
🎤 Preparing your AI Agents for the Ice(berg) Age
In a future where AI agents serve billions, they must query up-to-date analytical and operational data to stay grounded in facts. With enterprise analytics shifting to open formats like Apache Iceberg, agents must learn to "speak Iceberg." This unlocks a key superpower: portability. Iceberg-savvy agents can run in the cloud, on-premises, or even on a developer’s laptop—wherever the data lives. That flexibility is crucial as GPU stacks vary wildly across environments. This talk explores how Iceberg and portable runtimes empower fact-grounded agents to operate across diverse hardware and deployment models.
Serhii Sokolenko is the CEO and co-founder of Tower.dev, a serverless Python platform that frees data teams from managing complex infrastructure. Previously, he built data platforms at Databricks (secure compute for GPU workloads), Snowflake (search & low-latency), and Google Cloud (Dataflow). His past startups tackled emotion detection in text and brought natural text understanding to LegalTech.
📢 Dynamic, Scalable, and Schema-Evolving: Introducing the Flink Dynamic Iceberg Sink
As modern data platforms shift toward real-time, multi-tenant, and lakehouse architectures, the need for dynamic and flexible data pipelines is growing rapidly. The Flink Dynamic Iceberg Sink is a new connector for Apache Flink that allows users to seamlessly write streaming data into multiple Apache Iceberg tables — dynamically, efficiently, and with full schema evolution support.
While the standard Flink Iceberg sink is limited to writing into a single, predefined table with a static schema, the Dynamic Iceberg Sink removes these constraints. It enables real-time, fine-grained routing of records to different Iceberg tables. Target tables don’t need to be predefined at job submission, which unlocks powerful new patterns such as multi-tenant ingestion, event-driven table management, and adaptive data partitioning.
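For contrast with the standard sink described above, a single-table Flink Iceberg job is wired up roughly like this sketch (the table path is a placeholder); the dynamic sink generalizes this one fixed TableLoader into per-record routing.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.sink.FlinkSink;

public class SingleTableSink {
    /** Standard sink: exactly one target table, fixed at job-submission time. */
    public static void appendToOneTable(DataStream<RowData> input) {
        FlinkSink.forRowData(input)
            // One fixed table; the dynamic sink replaces this with routing.
            .tableLoader(TableLoader.fromHadoopTable("file:///tmp/warehouse/db/events"))
            .append();
    }
}
```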
In addition to routing flexibility, the sink offers automatic schema migration: it detects schema changes in incoming data streams and synchronizes the corresponding Iceberg table schemas as needed. This reduces manual intervention, mitigates schema drift, and ensures data consistency across evolving pipelines. The sink also integrates deeply with Flink’s checkpointing mechanism, offering exactly-once delivery semantics and transactional writes.
This session will cover the key architectural components, including dynamic table discovery, schema synchronization, and performance optimizations. We’ll share real-world benchmarks, example use cases, and lessons learned from production deployments. Attendees will leave with a clear understanding of how the Flink Dynamic Iceberg Sink can simplify complex pipelines, improve data agility, and bridge the gap between stream processing and modern lakehouse storage.
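To make the routing idea concrete before the session, here is a deliberately hypothetical Java sketch of per-record routing; DynamicIcebergSink, DynamicRecord, and the generator callback are modelled on the connector's builder style, but exact class and method names differ across Iceberg/Flink releases, so read this as pseudocode rather than the verified API.

```java
// Hypothetical sketch -- names are assumptions modelled on the connector's
// builder style, not the verified API of any specific release.
DynamicIcebergSink.forInput(events)           // events: DataStream<Event>
    // Per element, emit a record that names its target table and carries
    // the (possibly evolved) schema; no table is fixed at submission time.
    .generator((event, out) -> out.collect(
        new DynamicRecord(
            TableIdentifier.of("db", "tenant_" + event.tenantId()),
            event.icebergSchema(),
            event.rowData())))
    .catalogLoader(catalogLoader)             // where tables get created/loaded
    .append();
```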
Maximilian Michels is a software engineer at Apple who loves open source, distributed systems, and stream processing. He previously worked on large-scale data processing tools and platforms at Google, Lyft, and Splunk, and is a PMC member of the Apache Flink project.
🎤 From Shards to One: Staffbase's Path to a Cross-Functional Platform
In this talk, Staff Engineer Vinnie will walk us through how Staffbase, an employee communications platform, incrementally evolved its data platform from disjointed cron jobs and heterogeneous pipelines to centralized data assets in a data lake backed by Apache Iceberg.
Vinnie is a Staff Engineer at Staffbase, helping organizations communicate with their employees. With previous experience ranging from consulting to global logistics leaders to scrappy BioTech startups, he is a fan of lean data platforms, open source, and connecting the dots between engineering and business.
______________________________________________________
