In creating this charter on the ethics of Artificial Intelligence (AI), we wish to assume the responsibility of a company engaged alongside armed forces and intelligence agencies on a daily basis.
Our guiding principle is simple: the use of our software should preserve the role of humans in the decision-making process and help leaders make better decisions. In practice, this means that we exercise rigorous ethical control over our technology throughout its life cycle - from design to user feedback - and that we apply internationally recognized ethical standards to our business development.
This guiding principle is reflected in the following seven-point charter:
Due to the nature of its activities, Preligens operates within constantly evolving national and European legislative and regulatory frameworks, which require strict compliance with a number of specific rules, particularly in the areas of export control and data protection. These standards form the basis of Preligens' compliance policy, which is essential to our overall ethical approach. In the same spirit, we closely follow the recent European recommendations on AI, as well as the debates on the ethics of new technologies led by recognized institutions and organizations such as the European Union (EU), the OECD and the UN.
We ensure that our staff are made aware of the ethical issues surrounding AI. From the engineers developing the algorithms to employees with less technical backgrounds, everyone must understand the effects of technologies applied to the intelligence and defense fields - their potential, their power, and their limits. We pursue this objective in two ways: targeted training when employees first join the company, and regular presentations on the company's technological and commercial objectives. This internal conversation should foster a constructive exchange between employees and management.
Upstream of our research and development (R&D) efforts, we maintain a close dialogue with the scientific research ecosystem in order to reconcile collegiality, performance and judgement; we believe that staying at the cutting edge of technology leads to greater precision, and therefore to more confidence in the decisions our tools inform. Through the participation of our engineers in academic seminars and by hiring researchers and PhD graduates, we contribute to advancing the state of the art in AI. We regularly give back through publications, articles and conferences. This is also why our AI research department integrates ethical considerations from the very first steps of our innovative concepts.
In the design and development phases, our teams ensure that the issues raised by our clients are understood within the company so that the technology chosen fits the use case. This includes optimizing the accuracy of the analyses and the level of information they provide. We therefore adapt our technology to the level of explainability and criticality required by the use case. We minimize cognitive and technical biases so as not to unbalance the decision-making process, and we ensure the best possible quality of analyses. Our culture also imposes systematic testing of the code to validate that our deliverables meet user needs. We systematically store and track all data used for training to support continuous algorithm improvement.
To protect the fruits of this development process, we constantly increase the robustness of our algorithms and computer systems. The models we deliver are protected and encrypted. We adapt to cyber threats, notably by training our algorithms on datasets structured around a proprietary annotation system that is automatically traced within our information system. To make our algorithms as robust as possible to errors, we continually analyze their operating envelopes and areas for improvement. To do so, we test them on large proprietary test datasets that are representative of our customers' use cases. This allows us to continuously improve our algorithms and to deliver new versions to users every six weeks on average.
Mindful that our technologies serve security and protection objectives in a particularly sensitive context - and that some of our software's functionalities could be misused - we have established strict sales rules. The resulting geographical scope follows national, European and multilateral policies on trade restrictions in the field of international security and the fight against the proliferation of conventional arms; we couple these governmental precautions with our own risk assessment and mapping of certain countries. This assessment may also draw on data from recognized international organizations comparing respect for the rule of law and fundamental rights (press, association, belief, voting, etc.).
While the performance of our algorithms allows us to process an ever-increasing volume of information and sensor data, we ensure that they are validated against real data before we explain to our clients how they work. We provide a system that allows users to annotate their own data and measure the performance of the algorithms independently. In addition, Preligens is available to assist users in synthesizing the performance analysis. This process concludes with detailed feedback, enabling the co-construction of the delivered software: developers and users work together to improve the products. We also provide our clients with comprehensive training to ensure that end users sufficiently master the techniques delivered; this training is repeated each time a new feature is delivered.
Released in May 2021