PhD thesis: Scalable data management systems for Big Data
Big Data is commonly characterized by the three V's: Volume, Velocity, and Variety.
Managing Big Data requires fundamental changes in the architecture of data management systems. Storage systems must keep innovating to adapt to continuous data growth: they need to scale while maintaining high performance for data accesses.
This thesis focuses on building scalable data management systems for Big Data. More specifically, we focus on Big Volume and Big Velocity.
Our first and second contributions address the challenge of providing efficient support for Big Volume of data in data-intensive high-performance computing (HPC) environments. In particular, we address the shortcomings of existing approaches in handling atomic, non-contiguous I/O operations in a scalable fashion. We propose and implement a versioning-based mechanism that offers isolation for non-contiguous I/O without the need for expensive synchronization. In the context of parallel array processing in HPC, we introduce Pyramid, a large-scale, array-oriented storage system that revisits the physical organization of data in distributed storage systems for scalable performance. Pyramid favors multidimensional-aware data chunking, which closely matches the access patterns generated by applications, and combines distributed metadata management with versioning-based concurrency control to eliminate synchronization under concurrent accesses.
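To illustrate the multidimensional-aware chunking idea, a minimal sketch (not Pyramid's actual on-disk layout; `chunk_id` and the chunk shapes are hypothetical names chosen for illustration): cells that are close in every dimension map to the same chunk, so a subarray access touches few chunks, whereas a linear, row-major striping would scatter it across many stripes.

```python
def chunk_id(cell, chunk_shape):
    """Map an N-dimensional cell coordinate to the coordinate of the
    chunk that holds it, for a regular grid of fixed-shape chunks."""
    return tuple(c // s for c, s in zip(cell, chunk_shape))

# With 4x4 chunks over a 2-D array, spatially close cells share a chunk:
assert chunk_id((5, 6), (4, 4)) == (1, 1)
assert chunk_id((7, 7), (4, 4)) == (1, 1)
# A cell in another region lands in a different chunk:
assert chunk_id((0, 9), (4, 4)) == (0, 2)
```

This is the design choice the abstract alludes to: the chunking granularity follows the array's dimensionality rather than its serialized byte order.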
Our third contribution addresses Big Volume at the scale of geographically distributed environments. We consider BlobSeer, a distributed versioning-oriented data management service, and propose BlobSeer-WAN, an extension of BlobSeer optimized for such environments. BlobSeer-WAN takes the latency hierarchy into account by favoring local metadata accesses, and features asynchronous metadata replication with a vector-clock implementation for collision resolution.
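A minimal sketch of the standard vector-clock mechanism that underlies this kind of collision resolution (generic textbook logic, not BlobSeer-WAN's actual implementation; function names are illustrative): each site keeps a per-site counter, and two updates collide exactly when neither clock dominates the other.

```python
def vc_merge(a, b):
    """Element-wise maximum of two vector clocks (dicts: site -> counter)."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in set(a) | set(b)}

def vc_compare(a, b):
    """Return 'equal', 'before', 'after', or 'concurrent' (a collision)."""
    keys = set(a) | set(b)
    le = all(a.get(k, 0) <= b.get(k, 0) for k in keys)
    ge = all(a.get(k, 0) >= b.get(k, 0) for k in keys)
    if le and ge:
        return 'equal'
    if le:
        return 'before'
    if ge:
        return 'after'
    return 'concurrent'

# A replica that saw more events dominates:
assert vc_compare({'s1': 1}, {'s1': 1, 's2': 1}) == 'before'
# Independent updates at two sites are flagged as a collision:
assert vc_compare({'s1': 2}, {'s2': 1}) == 'concurrent'
assert vc_merge({'s1': 2}, {'s2': 1}) == {'s1': 2, 's2': 1}
```

With asynchronous replication, a replica applies a remote update directly when it is 'after' its local clock, and triggers collision resolution only on 'concurrent'.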
To cope with the Big Velocity characteristic of Big Data, our last contribution features DStore, an in-memory document-oriented store that scales vertically by leveraging the large memory capacity of multicore machines. DStore demonstrates fast, atomic processing of complex update transactions while maintaining high-throughput read access. It follows a single-threaded execution model that executes update transactions sequentially, while relying on versioning concurrency control to support a large number of simultaneous readers.
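The single-writer-plus-versioning scheme can be sketched as follows (a simplified illustration under assumed semantics, not DStore's actual code; the class and method names are hypothetical): one writer thread applies transactions in order and publishes each result as a fresh immutable snapshot, so readers dereference the current snapshot without taking any lock.

```python
import queue
import threading

class SingleWriterStore:
    """One writer thread serializes update transactions; readers access
    the latest published snapshot lock-free."""

    def __init__(self):
        self._snapshot = {}              # currently published version
        self._updates = queue.Queue()
        self._writer = threading.Thread(target=self._run, daemon=True)
        self._writer.start()

    def _run(self):
        while True:
            txn = self._updates.get()
            if txn is None:              # shutdown sentinel
                break
            new_version = dict(self._snapshot)  # copy-on-write version
            txn(new_version)             # whole transaction applied...
            self._snapshot = new_version  # ...then published atomically
            self._updates.task_done()

    def submit(self, txn):
        """Queue a transaction: a function mutating one version dict."""
        self._updates.put(txn)

    def read(self, key):
        return self._snapshot.get(key)   # consistent, lock-free read

store = SingleWriterStore()
store.submit(lambda d: d.update({'doc1': {'count': 1}}))
store._updates.join()                    # wait until the writer publishes
assert store.read('doc1') == {'count': 1}
```

Because a snapshot is never mutated after publication, a reader holding a reference observes either the state before a transaction or after it, never a partial update, which is the isolation property the sequential writer provides.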
Journal articles:
_ Towards scalable array-oriented active storage: the Pyramid approach. Tran V.-T., Nicolae B., Antoniu G. In ACM SIGOPS Operating Systems Review 46(1):19-25, 2012.
International conferences and workshops:
_ Pyramid: A large-scale array-oriented active storage system. Tran V.-T., Nicolae B., Antoniu G., Bougé L. In The 5th Workshop on Large Scale Distributed Systems and Middleware (LADIS 2011), Seattle, September 2011.
_ Efficient support for MPI-IO atomicity based on versioning. Tran V.-T., Nicolae B., Antoniu G., Bougé L. In Proceedings of the 11th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing (CCGrid 2011), 514-523, Newport Beach, May 2011.
_ Towards A Grid File System Based On A Large-Scale BLOB Management Service. Tran V.-T., Antoniu G., Nicolae B., Bougé L. In CoreGRID ERCIM Working Group Workshop on Grids, P2P and Service computing, Delft, August 2009.
_ Towards a Storage Backend Optimized for Atomic MPI-I/O for Parallel Scientific Applications. Tran V.-T. In The 25th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2011): PhD Forum, 2057-2060, Anchorage, May 2011.
Research reports:
_ DStore: An in-memory document-oriented store. Tran V.-T., Narayanan D., Antoniu G., Bougé L. INRIA Research Report No. 8188, INRIA, Rennes, France, 2012.