"From Algorithms and Models to Real-world AI Integration"
Date: November 4th, 2025
Location: Centro de Investigación en Matemáticas (CIMAT), Guanajuato, Mexico
Format: Hybrid (In-person & Virtual)
As AI technologies mature, the focus is shifting from model development to the effective deployment of models in real-world systems. In domains such as robotics, manufacturing, healthcare, and intelligent infrastructure, integrating AI into production environments introduces unique challenges related to system design, latency, reliability, monitoring, and trust.
The 1st CHARAL workshop at MICAI 2025 will provide a forum for researchers and practitioners to explore holistic approaches to engineering and operating AI systems that are robust, explainable, and adaptable in complex, real-world conditions.
We invite researchers and practitioners to submit papers addressing the challenges of deploying AI systems in real-world environments. We welcome both theoretical contributions and practical case studies. Papers should be written in English in Springer LNCS style and must not exceed 8 pages. Submissions must not contain the authors' names or affiliations.
Accepted papers will be published in the Springer LNAI proceedings. The author fee for presentation at the MICAI conference and publication is $5,000 MXN (approximately $300 USD). Submissions should be made through the CMT system at:
Submit Your Paper
Paper Submission Deadline: September 5th, 2025
Notification of Acceptance: September 19th, 2025
Camera-Ready Deadline: September 26th, 2025
Workshop Date: November 4th, 2025
Topics of interest include:
Architectural patterns, system constraints management, fail-safe design, and graceful degradation in AI pipelines.
Observability instrumentation, logging and tracing, debugging techniques, and performance feedback loops.
Uncertainty estimation, explainability methods, red-teaming, adversarial robustness, and evaluation metrics.
Real-world data pipelines, feedback-driven collection, active learning, and weak supervision techniques.
Adapting large models for latency-sensitive environments, robotics, and manufacturing systems.
Quantization, pruning, distillation, streaming inference, and federated updates for resource-constrained environments.
The workshop program will be adjusted based on the accepted papers and speakers.