Federated + Continual Learning in ALCHIMIA: from collaborative training to models that stay reliable, continuously creating value

Feb 3, 2026 | blog

Industrial AI creates value only when models can be deployed, trusted, and kept accurate over time. In ALCHIMIA, this is especially important: pilot environments are dynamic, data is generated continuously, and valuable operational knowledge is distributed across different sites.

To address these challenges, the ALCHIMIA review demo presented an integrated Federated Learning + Continual Learning (FL+CL) system that enables (1) cross-site learning without centralizing raw data, and (2) continuous adaptation of models once they run in production.

Why FL + CL?

Federated Learning (FL) allows multiple locations or plants to train a shared model collaboratively while keeping their operational data local. This makes it possible to benefit from data generated elsewhere without having to share it, which is often a strict requirement in industrial settings (confidentiality, IP, governance constraints).
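To make this concrete, below is a minimal sketch of the federated averaging (FedAvg) scheme that underlies most FL setups: each site trains on its own data and only the resulting model weights are sent back for aggregation, so raw operational data never leaves the site. The function names, the plain-SGD local trainer and the sample-weighted average are illustrative assumptions, not the ALCHIMIA implementation.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.01, epochs: int = 5) -> np.ndarray:
    """One site's local training step (plain linear model trained with gradient descent)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def fedavg_round(global_weights: np.ndarray, site_data: list) -> np.ndarray:
    """One FL round: every site trains locally, the server averages the
    returned weights in proportion to each site's sample count."""
    updates, sizes = [], []
    for X, y in site_data:                       # raw (X, y) stays at the site
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())
```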

Continual Learning (CL) complements FL by keeping models robust after deployment. As processes evolve, model performance can degrade due to data drift. CL detects such drift and triggers controlled updates so that models remain reliable.
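As an illustration of what drift-aware monitoring can look like, the sketch below compares incoming data against the reference window seen at training time using a per-feature two-sample Kolmogorov-Smirnov test, and flags drift when any feature deviates significantly. The p-value threshold and the retraining hook are assumptions made for the example, not the detector used in the demo.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, incoming: np.ndarray,
                 p_threshold: float = 0.01) -> bool:
    """Flag drift if any feature's online distribution differs
    significantly from the reference window used at training time."""
    p_values = [ks_2samp(reference[:, j], incoming[:, j]).pvalue
                for j in range(reference.shape[1])]
    return min(p_values) < p_threshold

# if detect_drift(train_window, online_window):
#     trigger_retraining()   # hypothetical hook: launches a controlled model update
```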

Together, FL+CL provide a lifecycle approach: collaborative learning across sites + continuous validation and updating over time.

What the demo shows

The demo focuses on the operational, end-to-end workflow:

  • The FL+CL infrastructure deployed at both central server and client sides.
  • The end-to-end deployment from the Central Server perspective and from CELSA’s side.
  • Data drift detection on incoming/online data followed by automatic retraining, adapting the global model to the new distribution.
  • Production monitoring and traceability, including MLflow-based visualization of model metrics (a minimal logging sketch follows this list).
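On the last point, a minimal sketch of how a drift-triggered retraining event could be traced with MLflow is shown below; the experiment name, run name, parameters, metric values and tags are illustrative assumptions, not the actual ALCHIMIA configuration.

```python
import mlflow

mlflow.set_experiment("alchimia-fl-cl-demo")        # hypothetical experiment name

with mlflow.start_run(run_name="drift-triggered-retrain"):
    mlflow.log_param("trigger", "data_drift")
    mlflow.log_param("fl_round", 42)                # hypothetical round identifier
    mlflow.log_metric("val_rmse_before", 0.31)      # illustrative values only
    mlflow.log_metric("val_rmse_after", 0.19)
    mlflow.set_tag("site", "client-A")              # hypothetical site label
```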

Benefits for ALCHIMIA use-cases

This demo validates an approach that can be replicated across pilots:

  • Shared learning without data sharing: each site benefits from knowledge generated at the other locations while keeping its own data local.
  • Models that stay accurate: drift-aware monitoring and retraining prevent silent degradation.
  • Operational trust: monitoring and metrics provide transparency on model behaviour and updates, giving users the information they need to understand and trust each update.
  • Reusable architecture: a repeatable deployment pattern that supports scaling to new sites and use cases.