29 Oct 2018
World innovation news
Software Systems, Multimedia and Cybersecurity, Intelligent and Autonomous Systems
A Biometric Mirror to Question AI Ethics
Researchers at the University of Melbourne have created a mirror that does exactly the opposite of what is expected: instead of letting us look at ourselves, the mirror looks at us, even scrutinizes us. The team has built a system that infers physical and psychological characteristics from a person's facial features. They named it the Biometric Mirror. It is an interactive application designed to make people aware of some current and future uses of artificial intelligence. The project was carried out in collaboration with Science Gallery Melbourne and will be exhibited next year.
How the Mirror Works
The installation combines a screen with a camera that photographs the person looking at it in order to analyze their facial features. An algorithm then infers 14 characteristics. In addition to age, sex and ethnic group, the mirror displays a psychological and interpersonal profile: traits such as sociability, introversion, kindness and aggressiveness are each quantified as low, medium or high.
The longer a person stares at the mirror, the more characteristics the system reveals about their psychological profile. The camera also continues to capture images in order to analyze the person's reactions to certain questions, and the results appear on the screen as they become available.
The algorithm draws on a database of thousands of photographs of people, previously annotated by study respondents whose observations were based on psychometric procedures. When a person stands in front of the Biometric Mirror, their features and facial expressions are compared against this annotated data.
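The trait quantification described above can be sketched in a few lines. This is a purely hypothetical illustration, not the project's actual code: the `bin_trait` function, its thresholds, and the example scores are all invented for the sake of the example.

```python
# Hypothetical sketch: mapping numeric trait scores to the coarse
# low/medium/high bands the mirror displays. Thresholds are invented.
def bin_trait(score: float) -> str:
    """Map a trait score in [0.0, 1.0] to a coarse band."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

# Invented example scores for a few of the 14 characteristics.
profile = {"sociability": 0.72, "introversion": 0.40, "aggressiveness": 0.10}
bands = {trait: bin_trait(score) for trait, score in profile.items()}
print(bands)
# → {'sociability': 'high', 'introversion': 'medium', 'aggressiveness': 'low'}
```

The coarse banding matters to the project's point: a precise-looking numeric score is collapsed into a blunt label that is then presented back to the viewer as if it were a fact about them.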
A Social Awareness Concept
What is the purpose of this system? Applications of a biometric mirror are easy to imagine, from criminal profiling to personal development games. The point of the project, however, is to draw attention to the issues this kind of technology raises.
According to Niels Wouters, a postdoctoral researcher at the Microsoft Research Center for Social Natural User Interfaces at the Melbourne School of Engineering and lead author of the study, the system's artificial intelligence is accurate and works as intended. Its results, however, are not objective, because the analysis rests on personal judgments: the psychometric annotations consist of subjective impressions, so they describe not who the person really is, but the impression they make on others.
The main purpose of the application is to highlight the urgency of a debate on ethics in artificial intelligence, a subject often reserved for specialists and one on which the public rarely has the opportunity to voice an opinion. The researcher points out that facial recognition technologies are already part of urban safety projects in several places, including China and the city of Perth, Australia.
The study also aims to highlight the impact of mass data processing technologies on privacy rights: artificial intelligence systems analyze the behaviour of people online and in public spaces in order to identify patterns and build predictive technologies.
To heighten participants' awareness, the Biometric Mirror is programmed to ask uncomfortable questions, such as:
- How would you feel if your attributes were disclosed?
- How would you feel if you learned that you did not get a job because artificial intelligence gave you a low confidence level?
- How would you react if the authorities considered you to be a potentially aggressive person?
An experiment worth following!
Hanen Hattab is a PhD student in Semiology at UQAM. Her research focuses on subversive and countercultural arts and design practices such as artistic vandalism, sabotage and cultural diversions in illustration, graphic arts and sculpture.