Testing and Trusting Machine Learning Systems
Received Date: January 26, 2021; Published Date: February 24, 2020
Machine learning systems are now ubiquitous. Many of them operate as black boxes, producing predictions while masking their internal logic from the user. This lack of explainability raises practical and ethical concerns. Explaining individual predictions reduces blind reliance on black-box ML classifiers, and trustworthy Artificial Intelligence has become an active area of research. At the same time, the testing of such systems has not yet been formalized. This paper highlights these two issues.