Microsoft disables facial recognition: what AI thinks may not be what you express

This article comes from Ai Faner

Expressing emotion through facial expressions is an almost innate human ability, and people are used to reading others’ expressions to guess how they feel. With the rapid development of technology, however, AI (artificial intelligence) can now also recognize people’s faces and attempt to read emotions from their expressions.

▲ Picture from: ResearchGate

Not long ago, Microsoft, which has long been developing facial recognition technology, released a guide titled ‘A framework for building AI systems responsibly’, openly sharing its Responsible AI Standard, the framework that guides how the company builds AI systems.

▲ Picture from: Microsoft

The article mentions that Microsoft has decided to retire the facial analysis capabilities of its Azure Face service, because these capabilities can be used to attempt to infer emotional states and identity attributes which, if abused, could subject people to stereotyping, discrimination, or unfair denial of services.

Currently, face recognition services are limited to Microsoft managed customers and partners. Existing customers have one year to transition but must stop using these features by June 21, 2023; new users can request access through the facial recognition application form.

▲ Picture from: Microsoft

Microsoft is not disabling the technology entirely, either, but is instead incorporating it into ‘controlled’ accessibility tools such as ‘Seeing AI’ for people with visual impairments. The app can describe objects and text, read signs, describe someone’s facial expression, provide navigation, and more. It currently supports English, Dutch, French, German, Japanese, and Spanish.

▲ Picture from: Microsoft

The guidance, released by Microsoft, explains the tech company’s decision-making process, including a focus on principles such as inclusivity, privacy and transparency, and is the first major update to the standard since its launch in late 2019.

Microsoft said it made such a big change to facial recognition because it recognized that for AI systems to be trustworthy, they needed to properly address the problems they were designed to solve.

▲ Picture from: Microsoft

As part of the effort to align the Azure Face service with the requirements of the Responsible AI Standard, Microsoft will also retire capabilities that infer emotional states (happy, sad, neutral, angry, etc.) and identity attributes (such as gender, age, smile, facial hair, hair, and makeup).

In the case of emotional states, Microsoft decided not to provide open API access to technologies that scan people’s faces and claim to be able to infer their emotional state based on their facial expressions or movements.

▲ Picture from: Microsoft

Microsoft’s documentation shows that the service can identify 27 facial landmarks on a person’s face and judge a variety of face attributes, such as whether a given face is blurred, whether it has accessories, estimated gender and age, whether the person wears glasses, the type of hair, whether there is a smile, and so on.
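For readers curious what this looked like in practice, below is a minimal sketch (not an official Microsoft sample) of a request to the Face REST API v1.0 detect endpoint asking for landmarks plus the attribute estimates being retired; the endpoint URL, key, and image URL are hypothetical placeholders.

```python
# Minimal sketch, assuming a hypothetical Azure Face resource; the attribute
# list below (emotion, gender, age, smile, hair, makeup, ...) covers the kinds
# of inferences Microsoft is retiring from open access.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-subscription-key>"                                   # placeholder

params = {
    "returnFaceLandmarks": "true",  # the 27 facial landmark points
    "returnFaceAttributes": (
        "emotion,gender,age,smile,facialHair,hair,makeup,"
        "glasses,accessories,blur"
    ),
}

resp = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params=params,
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/photo.jpg"},  # any reachable image URL
)
resp.raise_for_status()

for face in resp.json():
    print(face.get("faceLandmarks"))   # landmark coordinates
    print(face.get("faceAttributes"))  # attribute estimates being deprecated
```

Under the new policy, a request like this from an account without approved access would no longer return the emotion and identity attributes.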

Experts inside and outside Microsoft have highlighted the lack of scientific consensus on the definition of ’emotion’, as well as the high level of privacy concerns surrounding this ability. So Microsoft also decided that it needed to carefully analyze all AI systems designed to infer people’s emotional states, whether they used facial analysis or any other AI technology.

▲ Picture from: Microsoft

It is worth mentioning that Microsoft is not the only company taking a serious look at facial recognition. Arvind Krishna, CEO of IBM, has also written to the US Congress announcing that the company is withdrawing from the facial recognition business. Both companies’ decisions are closely tied to the earlier, widely publicized death of George Floyd.

▲ Picture from: BBC

The concern is that this technology could give law enforcement agencies a surveillance tool that leads to human rights violations, and that citizens’ privacy could be compromised, while current US legislation in this area remains far from complete.

These companies have therefore decided to start by restraining themselves, so that the technology is not abused and citizens’ privacy and human rights are better protected. When the use of a technology is not yet constrained by sound regulation, constraining the development of the technology itself may be the better choice.

The text and pictures in this article are from Sina.com


This article is reprinted from https://www.techug.com/post/microsoft-has-disabled-the-facial-recognition-function-what-ai-thinks-may-not-be-what-you-c123337e973cd7019b85/
This site only republishes the article; the copyright belongs to the original author.
