The industrial and academic partners of the Confiance.ai program[1] have jointly developed an engineering methodology and software tools to improve the trustworthiness and safety of AI components and systems in critical applications. Our contribution to the AI Action Summit draws on our experience in these developments and on their application to real-world use cases.
Benefits and opportunities of AI/ML
Firstly, we want to emphasize the benefits obtained from machine learning applications in our industries. From the identification of objects of interest in aerial photographs to the detection of defects in welds, from the counting of products exiting a factory to collision avoidance for unmanned flying vehicles, many components and technical environments developed during Confiance.ai have contributed to improvements in our processes. These improvements have led to better and faster decisions, resulting in tangible benefits. As a key result, a publicly available engineering methodology, published on our Body of Knowledge website, is now in place to support our engineers (AI engineers, but also safety engineers, systems engineers, etc.) in the development of their critical applications. Thanks to these improvements, we are convinced that we will be in a better position to leverage the opportunities raised by AI and machine learning.
Industry applications, systems engineering approach, use cases
One key differentiating element of our approach to safe and trustworthy AI lies in its focus on industrial applications. Indeed, even if we recognize that the leading AI technical developments in this field address the concerns raised by the general public, we strongly believe that it is of critical importance to promote AI applications in industrial settings. In this respect, and considering that many of our industrial applications can be qualified as high-risk, we promote a professional approach based on best practices developed over the years in, for example, the aeronautics, automotive, and process industries: requirements and systems engineering, and extensive testing before deploying software into the production and usage chain. We have validated the methodology and tools on numerous representative use cases provided by our industrial partners. As an example, a vision application for detecting defects in the welds of automobile chassis at Renault's Le Mans factory was the subject of many experiments, covering data validation, robustness by design, component testing, explainability, human-machine interaction, and societal aspects. Other examples of use cases are the ACAS-XU collision avoidance framework for unmanned aerial vehicles by Airbus, demand forecasting for Air Liquide products, object and pedestrian detection for autonomous driving by Valeo, and consumer sentiment analysis for Renault's marketing team.
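As an illustration of the kind of component-level testing mentioned above, the minimal sketch below checks how a vision classifier's accuracy degrades as Gaussian noise is added to its inputs. The model class, data, and noise levels are hypothetical stand-ins, not the actual Renault application or the Confiance.ai toolset:

```python
import numpy as np

class WeldDefectModel:
    """Hypothetical stand-in for a trained weld-defect classifier."""
    def predict(self, images: np.ndarray) -> np.ndarray:
        # Placeholder decision rule: flag images whose mean intensity
        # exceeds a threshold; a real model would be a trained network.
        return (images.mean(axis=(1, 2)) > 0.5).astype(int)

def robustness_under_noise(model, images, labels, noise_levels, seed=0):
    """Measure how accuracy degrades as Gaussian noise is added to inputs."""
    rng = np.random.default_rng(seed)
    results = {}
    for sigma in noise_levels:
        noisy = np.clip(images + rng.normal(0.0, sigma, images.shape), 0.0, 1.0)
        results[sigma] = float((model.predict(noisy) == labels).mean())
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    images = rng.random((100, 32, 32))                  # synthetic grayscale patches
    labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)
    for sigma, acc in robustness_under_noise(WeldDefectModel(), images, labels,
                                             [0.0, 0.05, 0.1, 0.2]).items():
        print(f"noise sigma={sigma:.2f}  accuracy={acc:.2f}")
```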
Lifecycle approach
As of today, we observe that many AI applications are put into operation without consideration of their full lifecycle. When an AI system is going to remain in place for many years, sometimes decades, and is often embedded in other objects, it is of crucial importance to consider its full lifecycle from ideation to decommissioning. Some of these AI/ML systems will require data management and learning during operation, which raises even more challenges for their safety and trustworthiness. For this reason, and following well-established industrial practices, we developed the Confiance.ai methodology as a W-cycle with clearly identified stages from requirements to operations monitoring, which our engineers can use for their future AI developments and operations.
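To give a concrete flavor of such a staged lifecycle, the sketch below encodes a simplified set of stage gates, each requiring evidence before the next stage can be entered. The stage names and exit criteria are assumptions for the example only and do not reproduce the stages defined in the published Confiance.ai W-cycle:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class LifecycleStage:
    """One stage of an AI system lifecycle, with the criteria required to leave it."""
    name: str
    exit_criteria: List[Callable[[Dict], bool]] = field(default_factory=list)

    def can_exit(self, evidence: Dict) -> bool:
        return all(criterion(evidence) for criterion in self.exit_criteria)

# Illustrative stage gates; a real methodology defines its own stages and criteria.
STAGES = [
    LifecycleStage("requirements", [lambda e: e.get("requirements_reviewed", False)]),
    LifecycleStage("data_engineering", [lambda e: e.get("data_validated", False)]),
    LifecycleStage("model_development", [lambda e: e.get("accuracy", 0.0) >= 0.9]),
    LifecycleStage("verification_validation", [lambda e: e.get("robustness_tested", False)]),
    LifecycleStage("deployment", [lambda e: e.get("monitoring_configured", False)]),
    LifecycleStage("operations_monitoring", [lambda e: e.get("drift_alerting", False)]),
]

def next_blocked_stage(evidence: Dict) -> str:
    """Return the first stage whose exit criteria are not yet satisfied."""
    for stage in STAGES:
        if not stage.can_exit(evidence):
            return stage.name
    return "decommissioning"

if __name__ == "__main__":
    evidence = {"requirements_reviewed": True, "data_validated": True, "accuracy": 0.93}
    print("Next stage requiring evidence:", next_blocked_stage(evidence))
```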
France at the core of an international network
Since the launch of the program, it has been clear that the subject of trustworthy artificial intelligence goes far beyond the national framework within which Confiance.ai was born: our main industrial players have an international research and development footprint in many countries, their markets are global, and there are various initiatives around the world with similar themes.
Thus, privileged relationships have been built with our Canadian and German colleagues, which should be continued and strengthened. The labeling and standardization activities take on their full meaning in the European context, which itself relies on the Franco-German collaborative framework. The current network of interested parties for safe and trustworthy AI in critical applications involves other partners in Australia, Italy, the Netherlands, and the USA, to mention a few.
Confiance.ai thus finds itself at the center of various European and international initiatives on trusted AI. By regularly inviting these international partners to our annual event, the Confiance.ai Day, by being invited to their main events, by involving them in initiatives such as the AITA and ATRACC[2] AAAI symposia in 2023 and 2024, and by establishing partnership agreements with major international players in AI labeling, we are gradually building an international community of researchers and developers on the subject from which everyone can benefit in return.
Our main contributions in support of the AI Act
Confiance.ai’s industrial members are convinced of the need to improve trust in AI systems, particularly in the context of critical systems involving the health and safety of people and the protection of property. This conviction is in line with the objectives of the European regulation on AI; Confiance.ai is materializing this ambition by developing and evaluating a set of tools and methods, as well as by participating in the ongoing process of establishing standards for this purpose.
The AI Act expresses several requirements regarding the development of high-risk AI systems. These requirements include, among others, the quality of the data used during training, the robustness and accuracy of AI systems, their transparency, and the methods used for their validation.
The Confiance.ai project has developed a set of methods and tools that address a large portion of these requirements. While the standards supporting the application of the AI Act are being established by CEN/CENELEC, industry professionals, including the Confiance.ai consortium and anyone accessing the resources provided by the project (methodological guide, body of knowledge, taxonomy), can begin to assess the impact that the AI Act will have on their AI development cycle (MLOps) and the types of solutions they will need to integrate to achieve compliance.
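As a purely illustrative example of such an assessment, the sketch below maps the requirement themes listed above to the evidence an MLOps pipeline might attach to a model release and reports what is still missing. The theme names and evidence artifacts are assumptions for the example, not the wording of the AI Act or of the forthcoming CEN/CENELEC standards:

```python
# Illustrative mapping from AI Act requirement themes (data quality, robustness,
# accuracy, transparency, validation) to evidence artifacts a release might carry.
REQUIREMENT_EVIDENCE = {
    "data_quality": ["dataset_datasheet", "data_validation_report"],
    "robustness":   ["noise_robustness_report", "adversarial_test_report"],
    "accuracy":     ["held_out_evaluation_report"],
    "transparency": ["model_card", "intended_use_statement"],
    "validation":   ["independent_vv_report"],
}

def gap_analysis(available_artifacts: set) -> dict:
    """List, per requirement theme, the evidence still missing from a release."""
    return {
        theme: [artifact for artifact in needed if artifact not in available_artifacts]
        for theme, needed in REQUIREMENT_EVIDENCE.items()
    }

if __name__ == "__main__":
    release_artifacts = {"dataset_datasheet", "model_card", "held_out_evaluation_report"}
    for theme, missing in gap_analysis(release_artifacts).items():
        status = "OK" if not missing else "missing: " + ", ".join(missing)
        print(f"{theme:14s} {status}")
```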
We look at the developments of AI and ML with enthusiasm for their potential benefits, while remaining careful about their usage in critical industrial applications. Maximizing the potential of AI technology requires adequately managing the associated challenges related to safety and trustworthiness. But trust and safety in this class of applications are demanding for “traditional” deep learning systems as well as for LLMs and foundation models; our efforts towards building a methodology and software tools for the engineering of such applications are only a beginning.
Our results after four years of Confiance.ai are encouraging, so we will undertake new actions to meet the corresponding challenges. We believe that such an initiative demands a global and sustained effort to deepen research, preserve and maintain the program’s achievements, and industrialize the solutions we have developed. To tackle the remaining challenges, we must ensure the continuity and long-term impact of our work, and we therefore welcome other international partners and programs to join us in these exciting developments.
[1] Air Liquide, Airbus, Atos, CEA, Inria, IRT Saint-Exupéry, IRT SystemX, Naval Group, Renault, Safran, Sopra Steria, Thales, Valeo.
[2] AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC): https://sites.google.com/view/aaai-atracc