Research Output
Building a modern data platform based on the data lakehouse architecture and cloud-native ecosystem
  In today’s Big Data world, organisations can gain a competitive edge by adopting data-driven decision-making. However, managing organisational data and supporting growth requires a modern data platform that is portable, resilient, and efficient. Furthermore, changes in data management architectures have been accompanied by changes in storage formats, particularly open standard formats such as Apache Hudi, Apache Iceberg, and Delta Lake. With so many alternatives, organisations are often unclear on how to combine them into an effective platform. Our work investigates the capabilities provided by Kubernetes and other Cloud-Native software, using DataOps methodologies to build a generic data platform that follows the Data Lakehouse architecture. We define the data platform specification, architecture, and core components to build a proof-of-concept system. Moreover, we provide a clear implementation methodology by developing the core components of the proposed platform: infrastructure (Kubernetes), ingestion and transport (Argo Workflows), storage (MinIO), and query and processing (Dremio). We then conduct performance benchmarks using an industry-standard benchmark suite to compare cold- and warm-start scenarios and assess Dremio’s caching capabilities, demonstrating a 12% median improvement in query duration with caching.
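
  The cold/warm comparison described in the abstract can be illustrated with a short timing harness. The sketch below is not the paper’s benchmark suite: it assumes a Dremio coordinator reachable over Arrow Flight (queried here with pyarrow’s FlightClient), and the endpoint address, credentials, and SQL query are hypothetical placeholders used only to show the shape of the measurement.

    # Minimal cold/warm timing sketch (illustrative only, not the paper's harness).
    # Endpoint, credentials, and query are hypothetical placeholders.
    import statistics
    import time

    import pyarrow.flight as flight

    DREMIO_ENDPOINT = "grpc+tcp://dremio.example.internal:32010"  # hypothetical address
    USERNAME, PASSWORD = "benchmark_user", "benchmark_password"   # hypothetical credentials
    QUERY = "SELECT * FROM lakehouse.sales LIMIT 1000"            # hypothetical query

    def run_query(client, options, sql):
        """Submit a SQL statement over Arrow Flight and return its wall-clock duration."""
        start = time.perf_counter()
        info = client.get_flight_info(flight.FlightDescriptor.for_command(sql), options)
        reader = client.do_get(info.endpoints[0].ticket, options)
        reader.read_all()  # drain the result stream so the full query cost is measured
        return time.perf_counter() - start

    client = flight.FlightClient(DREMIO_ENDPOINT)
    bearer = client.authenticate_basic_token(USERNAME, PASSWORD)
    options = flight.FlightCallOptions(headers=[bearer])

    cold = run_query(client, options, QUERY)                      # first run: nothing cached yet
    warm = [run_query(client, options, QUERY) for _ in range(5)]  # repeated runs may hit the cache

    improvement = (cold - statistics.median(warm)) / cold * 100
    print(f"cold: {cold:.3f}s, warm median: {statistics.median(warm):.3f}s "
          f"({improvement:.1f}% median improvement)")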

  • Date:

    22 February 2025

  • Publication Status:

    Published

  • DOI:

    https://doi.org/10.1007/s42452-025-06545-w

  • Funders:

    Edinburgh Napier Funded

Citation

AbouZaid, A., Barclay, P. J., Chrysoulas, C., & Pitropakis, N. (2025). Building a modern data platform based on the data lakehouse architecture and cloud-native ecosystem. Discover Applied Sciences, 7, Article 166. https://doi.org/10.1007/s42452-025-06545-w

Authors

A. AbouZaid, P. J. Barclay, C. Chrysoulas, N. Pitropakis

Keywords

Data Lakehouse, Kubernetes, DataOps, Cloud-Native, Big Data, Artificial Intelligence
