In general, for transformative technologies to be responsible, their applications need to be transparent, governable, sustainable, secure and integrated. Let us make this more tangible by applying these criteria to the most popular transformative technology of today: AI.
So how do we make an AI application responsible (and help secure the benefits that AI can bring to mankind)?
First, we deal with the issue of transparency. Like most transformative technologies, AI is not easy to understand. And we do not have to understand it fully, as long as we can monitor it properly. AI scientists have already devised many metrics to assess the behavior of AI, both in terms of technical performance and resource usage and in terms of the modelled and trained behavior behind it. Any responsible AI application should make these metrics clear and understandable. Moreover, it should make it easy to identify source data, observe the application's behavior and label its output as AI-generated.
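As a minimal sketch of what that could look like in practice, the Python snippet below wraps every model answer in a record that carries its provenance, an explicit AI-generated label and its monitoring metrics. All names, the stubbed retrieval step and the stubbed model call are hypothetical, not a real API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

def retrieve_sources(question: str) -> list[str]:
    # placeholder: a real application would query its document store here
    return ["doc://example/handbook#section-2"]

def call_model(question: str, sources: list[str]) -> tuple[str, float, int]:
    # placeholder for the actual model API call
    return (f"Stub answer to: {question}", 120.0, 42)

@dataclass
class LabeledOutput:
    text: str                     # the model's answer
    model_id: str                 # which model and version produced it
    source_documents: list[str]   # identifiable source data
    metrics: dict[str, float]     # e.g. latency, token usage
    generated_by_ai: bool = True  # explicit AI-generated label
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def answer_question(question: str) -> LabeledOutput:
    sources = retrieve_sources(question)
    text, latency_ms, tokens = call_model(question, sources)
    return LabeledOutput(
        text=text,
        model_id="example-model-v1",
        source_documents=sources,
        metrics={"latency_ms": latency_ms, "tokens": float(tokens)},
    )

print(answer_question("What does section 2 of the handbook say?"))
```

The point is not the specific fields but that provenance, labeling and metrics travel with every answer instead of being bolted on afterwards.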
Second, we address AI application governance. Obviously, an AI application should comply with the EU Artificial Intelligence Act as well as with any national legislation on the subject. But, especially for AI, there should also be an ethics protocol in place, dealing with issues like moral behavior, fairness, inclusion, ownership and serving the public interest. And yes, there are metrics for these types of issues too. A responsible AI application will use them.
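To make "metrics for these types of issues" concrete, here is a small sketch of one widely used fairness metric, the demographic parity difference: the gap in positive-outcome rates between groups. The decision data below is invented purely for illustration.

```python
# Demographic parity difference: the gap between the highest and lowest
# positive-outcome rate across groups. 0.0 would indicate parity.

def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(
    outcomes_by_group: dict[str, list[int]]
) -> float:
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# 1 = application approved, 0 = rejected, per (hypothetical) group
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}
gap = demographic_parity_difference(decisions)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.38
```

A responsible application would track a number like this over time and treat a widening gap as a governance signal, not just a technical one.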
Third, an AI application should be sustainable, meaning it should be made future-proof. In a fast-developing field like AI this seems hard to achieve at the application level, especially if an organization plans to deploy multiple AI applications from multiple vendors. Therefore, a responsible AI application does not stand on its own but is built on a development platform that passes every new AI capability on to the application AND to all the other AI applications beside it on the same platform. Such a platform also serves another sustainable purpose: AI model agnosticism.
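One way to picture model agnosticism is a provider interface that every application on the platform codes against, so a new or better model can be plugged in once and every application benefits. The sketch below assumes hypothetical vendor classes; the names are illustrative, not a real vendor API.

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """The only contract an application depends on."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        # would call vendor A's API here
        return f"[vendor A] {prompt}"

class VendorBProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        # would call vendor B's API here
        return f"[vendor B] {prompt}"

def run_application(provider: ModelProvider, prompt: str) -> str:
    # the application never knows which concrete model it talks to
    return provider.complete(prompt)

print(run_application(VendorAProvider(), "Summarize this contract."))
print(run_application(VendorBProvider(), "Summarize this contract."))
```

Swapping vendors then becomes a platform decision rather than a rewrite of each application.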
Fourth comes security, which means a responsible way for an AI application to retrieve, process, store and delete data, especially where personal or private data is concerned. Another AI-specific security measure is taking precautions to prevent large AI models outside the organization from using internal output to train their own models. A third point worth mentioning is how the roles and rights an organization has defined for working with its data are reflected in the way an AI application is made available to the workforce. And finally, security also means taking responsibility for the cyber-security of the development platform and its AI applications.
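A minimal sketch of that third point, assuming the organization already keeps role-based data permissions: the AI application reuses those permissions instead of defining its own. The roles, collections and the stubbed model step are all hypothetical.

```python
# Existing role-based permissions, reused by the AI application
ROLE_PERMISSIONS = {
    "hr":      {"hr_records", "company_policies"},
    "finance": {"invoices", "company_policies"},
    "staff":   {"company_policies"},
}

def allowed_collections(role: str) -> set[str]:
    return ROLE_PERMISSIONS.get(role, set())

def ai_query(user_role: str, question: str, collection: str) -> str:
    # enforce the existing roles and rights before any model call
    if collection not in allowed_collections(user_role):
        return "Access denied: your role may not query this data."
    # placeholder for retrieval plus a model call scoped to `collection`
    return f"(answer to '{question}' using only '{collection}')"

print(ai_query("staff", "What is the travel policy?", "company_policies"))
print(ai_query("staff", "Show salary data.", "hr_records"))
```

The essential design choice is that the AI layer inherits the organization's access model rather than opening a side door around it.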
The fifth issue is perhaps the most transformative one. An AI application should probably never exist as a stand-alone gadget, residing on its own among other applications. We already looked at this from a platform angle, but what we mean here is that an AI application is almost never a goal in itself; rather, it tries to add value to some other application. In our vision, a responsible AI application should be connected to exactly the part of the process it is trying to support; only then does it become truly transformative.
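To illustrate the idea, here is a small sketch in which the AI capability is attached to one concrete step of an existing workflow (a hypothetical support-ticket pipeline) instead of living as a separate assistant. The names and the stubbed model call are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    subject: str
    body: str
    suggested_reply: str | None = None

def suggest_reply(ticket: Ticket) -> str:
    # placeholder for a model call; the AI serves only this single step
    return f"Draft reply for: {ticket.subject}"

def handle_ticket(ticket: Ticket) -> Ticket:
    # the AI output lands exactly where the workflow needs it,
    # as a suggestion the human agent can accept or edit
    ticket.suggested_reply = suggest_reply(ticket)
    return ticket

t = handle_ticket(Ticket("Login fails", "I cannot sign in since Monday."))
print(t.suggested_reply)
```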