Realizing Trustworthy AI — From Theory to Realization — “Non-Technical methods to realize Trustworthy AI”

Ansgar Bittermann
3 min read · May 6, 2021


Image: Mimzy on Pixabay

In this chapter we discuss the non-technical methods to realize the seven requirements of “Trustworthy AI”. In contrast to the methods of the last chapter, these all have a regulatory and organizational character. As with the technical methods, this list evolves over time and should not be seen as final.

The methods described by the HLEG are:

  1. Regulation
  2. Codes of Conduct
  3. Standardisation
  4. Certification
  5. Accountability via governance frameworks
  6. Education and awareness to foster an ethical mind-set
  7. Stakeholder participation and social dialogue
  8. Diversity and inclusive design teams

Regulation vs Codes of Conduct

Regulations are external rules, whereas Codes of Conduct are internal rules. Both will have to be revised in light of artificial intelligence. While companies will have to wait until governmental regulations are ready to be applied, every company can already include the EU’s Ethical Guidelines in its Code of Conduct to ensure that the company follows them. A Code of Conduct is not legally binding, but companies normally derive processes and internal rules from it, thereby reflecting the code in their living culture, and they can oblige employees to follow it by stating so in employment contracts.


Standardisation

Following international standards such as ISO is not mandatory for every company in every industry, but it signals to customers a dedication to certain goals, such as good management or IT security. Many companies already apply existing standards and norms to AI. The Institute of Electrical and Electronics Engineers (IEEE) launched IEEE P7000 in 2016 — a Draft Model Process for Addressing Ethical Concerns During System Design. It already has an ethical focus, but it still addresses all kinds of systems, not just AI systems. In the future we will probably see standards designed specifically for Trustworthy AI.


Certification

In the past, we could rely on a bachelor’s or master’s degree in a certain field, but today universities do not seem able to keep up with the pace of change in IT and AI. Thus non-university degrees and certifications seem better suited to demonstrating expertise in a given IT or AI field. Coursera, one of the world’s largest providers of online training, for example offers the IBM AI Engineering Certificate, and the IASSC offers Lean Six Sigma certifications for process optimization. Over time these certifications will probably become more standardized and follow the AI pipeline and AI life cycle.

Education and awareness to foster an ethical mind-set

Certifications are normally acquired outside the company, while in-house education reaches all employees. Companies that invest in in-house education can ensure that awareness of ethical topics is present across the whole workforce. Change the system, change the minds.

Accountability via governance frameworks

It is good practice to dedicate specific people in a company to a specific topic. Otherwise everyone assumes that someone else is handling it, and in the end no one has done anything. Appointing a person or a team to ensure that internal frameworks are followed, certifications are obtained, standardization is pursued, and educational awareness courses are attended will therefore yield the best results.

This concludes our discussion about the EU framework of “Trustworthy AI”.

— —

If you are interested in A.I. leadership education, ethical evaluation of your A.I. system or want to start your A.I. journey, just contact me at