Microsoft has announced a thorough revision of its ethical guidelines for the use of artificial intelligence, as stated on its official blog, with the aim of defining a concrete framework for the responsible development of AI systems.
As part of this decision, the company is also retiring several AI-based face analysis tools, including one capable of inferring a subject's emotions from video and image analysis. Emotion recognition tools in particular have drawn criticism from experts, who judge the inference of internal emotional states from outward expressions to be scientifically unfounded, pointing out that facial expressions vary across populations and cultures.
Microsoft has therefore decided to restrict access to certain features, as stated in its published transparency notes. Specifically, the decision concerns the Azure Face facial recognition services: access to some capabilities will be restricted, while others will be removed entirely. Capabilities considered harmless or useful, such as the automatic blurring of faces in images and videos for privacy purposes, are not affected and will remain openly available.
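For readers unfamiliar with the service, the following is a minimal sketch of the kind of Azure Face request affected by the change, written against the older v1 Python SDK (azure-cognitiveservices-vision-face); the endpoint, key, and image URL are placeholder values, and the emotion attribute requested here is among the capabilities being retired.

```python
# A minimal sketch, assuming the v1 Azure Face Python SDK
# (pip install azure-cognitiveservices-vision-face).
# ENDPOINT, KEY, and IMAGE_URL are hypothetical placeholders.
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
KEY = "<your-key>"
IMAGE_URL = "https://example.com/photo.jpg"

client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Requesting the 'emotion' attribute: this is the kind of capability
# Microsoft is withdrawing from the public Azure Face service.
faces = client.face.detect_with_url(
    url=IMAGE_URL,
    return_face_attributes=[FaceAttributeType.emotion],
)

for face in faces:
    # In the old API, each detected face carried per-emotion confidence
    # scores (happiness, sadness, anger, and so on).
    print(face.face_id, face.face_attributes.emotion)
```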
While public access is being revoked, with the phase-out running until 2023, Microsoft will continue to use some of these capabilities in its own products, including Seeing AI, an app that uses computer vision to describe the surroundings of blind and visually impaired people.
This is not the first time an AI tool for recognizing emotions has been at the center of controversy: recall, for example, the project by Intel and Classroom Technologies for detecting the emotional states of students, whose critical issues were also highlighted by Protocol. We refer readers to that article for further details.