Software — Stack — for Massively Geo-Distributed Infrastructures


National Initiatives

  • SEMAFOR
    Self Management of Fog Resources: Cloud systems are often designed with a set of centralized autonomic controllers that manage the different levels of Cloud administration without human intervention. In Fog Computing, the system is much larger, more heterogeneous, unreliable and highly dynamic, which prevents building a consistent centralized view on which control decisions could be based. Thus, to operate Fog systems automatically, distributed solutions are needed to orchestrate a possibly large number of small autonomic controllers, each having a local view of its controllable resources (a minimal illustrative sketch of such a controller is given after this list). To address this issue, the laboratories LS2N (IMT Atlantique) and LIP6 (Sorbonne Université), together with the company Alterway, investigate a generic approach and supporting framework for the autonomic management of Fog systems.


  • PICNIC
    Large dataset transfer between datacenters: Transferring a large dataset from one datacenter to another is still an open issue. Currently, the most efficient solution is to ship a hard drive with an express carrier, as proposed by Amazon with its Snowball offer. Recent evolutions in datacenter interconnects announce bandwidths of 100 to 400 Gb/s. The contention point is therefore no longer the network but the applications, which centralize data transfers and do not exploit the parallelism offered by datacenters made of many servers (and, in particular, many network interface cards, NICs). The PicNic project addresses this issue by allowing applications to exploit the network cards available in a datacenter, remotely, in order to optimize transfers (hence the acronym PicNic). The objective is to design a set of system services for massive data transfer between datacenters, exploiting the distribution and parallelisation of network flows (an illustrative sketch is given after this list).


  • SLICES-FR
    The aim of the project is to design and build a large infrastructure for experimental research on various aspects of distributed computing, from small connected objects to the large data centres of tomorrow. This infrastructure will allow end-to-end experimentation with software and applications at all levels of the software layers, from event capture (sensors, actuators) to data processing and storage, to radio transmission management and dynamic deployment of edge computing services, enabling reproducible research on all-point programmable networks. SLICES-FR is the French node of the SLICES research infrastructure (see below).
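
As a purely illustrative sketch (not SEMAFOR's actual framework), the following Python fragment shows the kind of small autonomic controller mentioned above for SEMAFOR: each controller monitors only its own node and coordinates directly with its peers, so no centralized view is ever built. All names, metrics and thresholds are hypothetical.

    # Hypothetical sketch of a small autonomic controller with a purely local view.
    from dataclasses import dataclass, field

    @dataclass
    class LocalController:
        """Manages a single Fog node; decisions rely on local state only."""
        node_id: str
        cpu_load: float = 0.0                        # last monitored load (0.0-1.0)
        peers: list = field(default_factory=list)    # neighbouring controllers

        def monitor(self, new_load: float) -> None:
            self.cpu_load = new_load

        def analyze_and_plan(self) -> str:
            # Purely local decision: no global, centralized view is required.
            if self.cpu_load > 0.8:
                return "offload"        # ask a peer to take over some workload
            if self.cpu_load < 0.2:
                return "consolidate"    # candidate to receive workload
            return "steady"

        def execute(self) -> None:
            action = self.analyze_and_plan()
            if action == "offload" and self.peers:
                # Coordination stays peer-to-peer: pick the least-loaded neighbour.
                target = min(self.peers, key=lambda p: p.cpu_load)
                print(f"{self.node_id}: offloading work to {target.node_id}")
            else:
                print(f"{self.node_id}: {action}")

    # Example: three controllers, each acting on its own local view.
    a, b, c = LocalController("fog-a"), LocalController("fog-b"), LocalController("fog-c")
    a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
    for ctrl, load in ((a, 0.9), (b, 0.1), (c, 0.5)):
        ctrl.monitor(load)
        ctrl.execute()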

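Similarly, the following sketch illustrates the intuition behind PicNic's parallelised transfers: the dataset is cut into chunks and each chunk is pushed through a different server/NIC of the source datacenter, so that no single host becomes a bottleneck. The NIC addresses and the copy_chunk placeholder are assumptions made for illustration, not the project's actual services.

    # Hypothetical sketch: spreading one large transfer over several servers/NICs.
    from concurrent.futures import ThreadPoolExecutor

    # Assumed NICs available in the source datacenter (e.g. one per server).
    SOURCE_NICS = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

    def copy_chunk(nic: str, offset: int, size: int) -> int:
        """Placeholder for a real transfer of one chunk bound to one NIC."""
        print(f"NIC {nic}: sending bytes [{offset}, {offset + size})")
        return size

    def parallel_transfer(total_size: int, chunk_size: int) -> int:
        """Spread the dataset over all NICs, one chunk per flow, flows in parallel."""
        offsets = range(0, total_size, chunk_size)
        with ThreadPoolExecutor(max_workers=len(SOURCE_NICS)) as pool:
            futures = [
                pool.submit(copy_chunk,
                            SOURCE_NICS[i % len(SOURCE_NICS)],  # round-robin over NICs
                            off,
                            min(chunk_size, total_size - off))
                for i, off in enumerate(offsets)
            ]
            return sum(f.result() for f in futures)

    # Example: a 1 TiB dataset cut into 64 GiB chunks spread over 4 parallel flows.
    assert parallel_transfer(total_size=2**40, chunk_size=64 * 2**30) == 2**40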


European Initiatives

  • SLICES
    The digital transformation of our societies is enabled by the design, deployment and operation of continuously evolving, complex digital infrastructures. The research community needs a test platform to address significant challenges related to their efficiency, reliability, availability, range, end-to-end latency, security and privacy. The EU-funded SLICES-DS will design SLICES, a Europe-wide test-platform, to support large-scale, experimental research that will provide advanced compute, storage and network components, interconnected by dedicated high-speed links. The main aim of SLICES will be to strengthen the research excellence and innovation capacity of European researchers and scientists in the design and operation of future digital infrastructures.



Past Initiatives

  • VeRDi (2018-2020) - Hélène Coullon (Coordinator)
    VeRDi is an acronym for Verified Reconfiguration Driven by execution. The project aimed at addressing distributed software reconfiguration in an efficient and verified way. It was funded by the French region Pays de la Loire, where Nantes is located.


  • SeDuCe (CPER 2015-2019) - Jean-Marc Menaud (Coordinator)
    The SeDuCe project (Sustainable Data Centers: Bring Sun, Wind and Cloud Back Together) aimed to design an experimental infrastructure dedicated to the study of data centers with a low energy footprint.


  • Hydda (PIA 2017-2020) - Hélène Coullon, Jean-Marc Menaud
    The HYDDA project aimed to develop a software solution allowing the deployment of Big Data applications with a hybrid design (HPC/Cloud) on heterogeneous platforms (cluster, Grid, private Cloud) and the orchestration of computation tasks (as done by Slurm, Nova for OpenStack, or Swarm for Docker). The main challenges addressed by the project were:

    • How to propose an easy-to-use service to host application components (from deployment to removal) that are typed both Cloud and HPC?
    • How to propose a service that unifies HPCaaS (HPC as a Service) and Infrastructure as a Service (IaaS) in order to offer on-demand resources and to take into account the specificities of scientific applications?
    • How to optimize the resource usage of these platforms (CPU, RAM, disk, energy, etc.) in order to propose solutions at the lowest cost?


  • GRECO (ANR-16-CE25-0016 2017-2020) - Adrien Lebre
    The goal of the GRECO project was to develop a reference resource manager for the cloud of things. The manager should act at the IaaS, PaaS and SaaS layers of the cloud. One of the principal challenges was to handle the execution context of the environment in which the cloud of things operates. Indeed, unlike classical resource managers, connected devices require considering new types of networks, execution platforms and sensors, as well as new constraints such as human interactions. The great mobility and variability of these contexts complicate the modeling of the quality of service. To face this challenge, we proposed new scheduling and data management systems that automatically adapt their behavior to the execution context. Such adaptation requires modeling the recurrent usages of the cloud of things, as well as the physical cloud architecture and its dynamics.


  • Bigstorage (2015-2018) - Adrien Lebre
    BigStorage was a European Training Network (ETN) whose main goal was to train future data scientists to apply holistic and interdisciplinary approaches for taking advantage of a data-overwhelmed world. This requires HPC and Cloud infrastructures with a redefinition of the storage architectures that underpin them, focused on meeting highly ambitious performance and energy-usage objectives.


  • DISCOVERY (Inria Project Lab 2015-2019) - Hélène Coullon, Shadi Ibrahim, Adrien Lebre (Coordinator), Mario Südholt
    The Discovery initiative aimed to overcome the main limitations of the traditional server-centric cloud solutions by revising the OpenStack software in order to make it inherently cooperative.


  • ANR KerStream (2017-2021) - Shadi Ibrahim (Coordinator)
    The KerStream project aimed to address the limitations of Hadoop and to go a step beyond it by developing a new approach, called KerStream, for reliable stream-based Big Data processing on clouds. KerStream keeps computation in memory to meet the low-latency requirements of stream data processing.


  • EPOC - CominLabs laboratory of excellence - Thomas Ledoux, Jean-Marc Menaud (Coordinator)
    With the emergence of the Future Internet and the dawning of new IT models such as cloud computing, the usage of data centers (DCs), and consequently their power consumption, is increasing dramatically. Besides the ecological impact, energy consumption is a predominant criterion for DC providers since it determines the daily cost of their infrastructure. As a consequence, power management has become one of the main challenges for DC infrastructures and, more generally, for large-scale distributed systems. The EPOC project focused on optimizing the energy consumption of mono-site DCs connected to the regular electrical grid and to renewable energy sources.


  • PrivGen (2016-2019) - CominLabs laboratory of excellence - Mario Südholt (Coordinator)
    The PrivGen project aimed at providing new techniques for securing shared genetic data and protecting its privacy when it is processed by distributed applications, among other settings. To do so, PrivGen proposed to develop:

    • new means of combining watermarking, encryption and fragmentation techniques to ensure the security and privacy of shared genetic data,
    • a composition theory for security mechanisms that allows the enforcement of security and privacy properties in a constructive manner at the programming level,
    • new service-based techniques for the distributed processing of shared genetic data.


  • Apollo (2017-2020) - Connect Talent - Shadi Ibrahim (Coordinator)