Realizing Trustworthy AI — From Theory to Realization — “Technical robustness and safety”

In the last chapter we talked about the first requirement “Human agency and oversight”. Today we will look into the second requirement “Technical robustness and safety”. This requirement is concerned with risk assessment and prevention of harm.

Resilience to attack and security

As with any other system, an AI system needs to be resilient to hostile attacks from both the outside and the inside. AI security extends the normal range of cybersecurity measures, because the models themselves and the incoming data also need to be protected and screened. Protection against data poisoning and model leakage is usually unfamiliar territory for cybersecurity teams, so they need to actively extend their processes and procedures to guard AI systems against these AI-specific threats. We already discussed Microsoft’s chatbot, which was deliberately manipulated through data poisoning. Note also that this requirement is in tension with the first requirement in some respects: model leakage creates threats to the system and should be avoided, yet the same openness, viewed as model transparency, is something to strive for.
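
To make “screening the incoming data” slightly more concrete, here is a minimal sketch of one simple layer of a data-poisoning defense. The function name and the z-score threshold are my own illustrative assumptions, not a prescribed technique from the guidelines: it drops gross outliers from a batch of one-dimensional feature values before they ever reach training.

```python
def screen_samples(samples, max_z=3.0):
    """Drop feature values more than max_z standard deviations
    from the batch mean (a crude outlier screen)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    std = var ** 0.5 or 1.0  # avoid division by zero for constant batches
    return [x for x in samples if abs(x - mean) / std <= max_z]
```

A real screening pipeline would of course work on multivariate data and use more robust statistics, but the principle is the same: suspicious inputs are flagged or removed before the model sees them.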

Risk Management / General Safety

As AI systems are increasingly used in critical tools and products, risk management needs to be woven into the general project management of AI development and should be enforced by general company governance.

Accuracy, reliability and reproducibility

Psychological tests, medicine and AI systems have a lot in common: they are all measured and judged by accuracy and reliability. Accuracy describes how close the final result is to the true value; it measures how big or small your overall error is. Reliability, in contrast, measures how widely or narrowly the errors are spread.

I know this can be confusing, so let’s look at the famous archery example which every first semester student in experimental design classes is tortured with.

Imagine you have a famous archer. He shoots ten arrows at the target.

  • All ten hit the bull’s eye. Then this trial run was both accurate and reliable.
  • If the archer instead had put all ten arrows in the top left corner of the target, his trial run would still be reliable, because he consistently reproduced the same result. However, it would not be accurate.
  • On the other hand, if he had spread the arrows unevenly, but all close around the bull’s eye, he would have an accurate but unreliable result.
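
The archery intuition translates directly into two numbers. As a minimal sketch (the helper name and example coordinates are illustrative, not from the HLEG), accuracy can be measured as the distance of the mean hit point from the bull’s eye, and reliability as the spread of the hits around their own mean:

```python
import math

def accuracy_and_reliability(shots, target=(0.0, 0.0)):
    """shots: list of (x, y) hit coordinates; target: the bull's eye.
    Returns (accuracy_error, spread): distance of the mean hit point
    from the target, and root-mean-square spread around that mean."""
    n = len(shots)
    mean_x = sum(x for x, _ in shots) / n
    mean_y = sum(y for _, y in shots) / n
    accuracy_error = math.dist((mean_x, mean_y), target)
    spread = math.sqrt(
        sum(math.dist(s, (mean_x, mean_y)) ** 2 for s in shots) / n
    )
    return accuracy_error, spread
```

Ten arrows all in the top left corner give a spread of zero (perfectly reliable) but a large accuracy error; arrows scattered symmetrically around the bull’s eye give an accuracy error near zero but a non-zero spread.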

Highly reliable and reproducible AI systems help developers, scientists and lawmakers to accurately describe what the AI system does, so your AI team should strive for them. Replication files, as suggested by the HLEG, can “facilitate the process of testing and reproducing behavior”.
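
One small, concrete ingredient of such replication files is the random seed. As a hypothetical sketch using only Python’s standard library, pinning every source of randomness to a documented seed makes a run repeatable:

```python
import random

def reproducible_run(seed, n=5):
    """Return n pseudo-random draws that are identical across runs
    for the same seed — the seed belongs in the replication file."""
    rng = random.Random(seed)  # isolated, seeded generator
    return [rng.random() for _ in range(n)]
```

Real AI pipelines also need to pin library versions, data snapshots and hardware-dependent settings, but the seeded-generator pattern is the starting point.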

In the next chapter, we will look into the principle of “privacy and data governance”.

— —

If you are interested in A.I. leadership education, ethical evaluation of your A.I. system or want to start your A.I. journey, just contact me at ansgar_linkedin@goldblum-consulting.com

Ansgar Bittermann

AI Evangelist — CEO of Goldblum Consulting