Face recognition software is widely used today and has proved useful in many scenarios, especially in the security sector. Online, it is used by social media platforms such as Facebook to identify where faces appear in images, which simplifies the process of tagging ourselves and others. But we (myself included) rarely consider what is actually happening to provide this information. Should we be worried that there is readily available software that can verify faces, identify similar faces, group faces, and identify people? Databases of faces are being collated from surveillance cameras, news photographs, and online images, and a search on Google reveals that many of these databases are available for download. It is concerning to think that our facial images may be used for unknown purposes, and that the algorithm results in a dehumanisation of our faces.
Using Microsoft's ‘Face’ software (https://azure.microsoft.com/en-gb/services/cognitive-services/face/), I have been exploring facial recognition similar to that used by Facebook and other social media sites.
The software runs an algorithm that determines the likelihood that two faces belong to the same person, displayed as a confidence score. I ran an experiment using images I have created during the project. Of the 19 images I chose, six were determined not to be me. One was indeed a friend of mine, but the other five were me, altered by makeup, props or digital manipulation. I found it intriguing that I am able to change my appearance to ‘fool’ the software, even with just makeup and wigs.
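For readers curious about what that confidence score looks like in practice, here is a minimal sketch of how a verification result might be handled. The JSON shape follows the Face API's public verify response (an `isIdentical` flag plus a 0–1 `confidence` score), but the values and the threshold are invented for illustration:

```python
import json

# A hypothetical verify response, in the shape the Face API's
# /face/v1.0/verify endpoint returns (values here are made up).
sample_response = json.loads('{"isIdentical": true, "confidence": 0.82}')

def same_person(response, threshold=0.5):
    """Decide whether two faces match, using the service's confidence score."""
    return response["confidence"] >= threshold

print(same_person(sample_response))  # 0.82 clears the default 0.5 threshold: True
```

Raising the threshold makes the check stricter, which is presumably how my altered images slipped below the line.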
The software can detect one or more human faces in an image. Each face is then surrounded by a square (pink for female and blue for male). The software can also predict other features:-
As well as a further 27 landmarks for each face in the image.
Below is a screenshot of the result and a copy of the code that the software has produced for that image.
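The code the software produces is JSON: one entry per detected face, with a bounding rectangle and any requested attributes. As a sketch of how that output can be read, here is a trimmed, hypothetical detection result (the field names follow the Face API's detect response; the `faceId`, coordinates and attribute values are invented):

```python
import json

# A trimmed, hypothetical detect result in the shape the Face API's
# /face/v1.0/detect endpoint returns. All values below are invented.
detect_result = json.loads("""
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": {"top": 131, "left": 177, "width": 162, "height": 162},
    "faceAttributes": {"gender": "female", "age": 32.4}
  }
]
""")

# Walk each detected face and report its predicted attributes and bounding box.
for face in detect_result:
    rect = face["faceRectangle"]
    attrs = face["faceAttributes"]
    print(f"{attrs['gender']}, age {attrs['age']}: "
          f"box at ({rect['left']}, {rect['top']}), "
          f"{rect['width']}x{rect['height']}")
```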
This experiment also produced interesting results with other images. My age varied according to the software and in some cases, it determined that I was male!
I then uploaded multiple image grids to see how the software handled this.
In some cases, faces were not recognised at all. In the first grid, 13 of the 15 images were recognised as faces. Of those 13, three were determined to be male.
The final part of the software determines the emotion of the face. It again results in a confidence score for each emotion for each face in the image. These emotions are:-
These emotions were chosen because they are universally understood and communicated across different cultures with particular facial expressions.
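As a sketch of how those per-emotion confidence scores can be turned into a single label, here is a hypothetical emotion block for one face (the eight emotion names match those the Face API reports; the scores are invented):

```python
# A hypothetical per-face emotion block, in the shape the Face API returns
# when emotion is requested: one confidence score per emotion. Scores invented.
emotion_scores = {
    "anger": 0.01, "contempt": 0.02, "disgust": 0.00, "fear": 0.00,
    "happiness": 0.90, "neutral": 0.05, "sadness": 0.01, "surprise": 0.01,
}

# The emotion with the highest confidence is the software's best guess.
dominant = max(emotion_scores, key=emotion_scores.get)
print(dominant)  # happiness
```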
The software produces interesting results but, as my experiments show, they are variable. Factors that affect the results include:-
Variation in colour of the faces with different lighting
Presence of glasses
Use of wigs and other props
Different angles of the face with respect to the camera lens
Tying back hair can cause confusion over gender
Selfie image vs portrait image shot with different lenses (see below). The image on the left was shot with an 80mm lens on a Nikon D810 camera and the image on the right was shot with an iPhone (24mm lens).
There are many ethical issues that need to be considered in the long term but, for now, as the software is not 100% accurate, we are faces in the crowd and not necessarily recognisable by a machine.
REFERENCES AND ALL SCREENSHOT IMAGES
Face API – Facial Recognition Software | Microsoft Azure. 2018. Azure.microsoft.com [online]. Available at: https://azure.microsoft.com/en-gb/services/cognitive-services/face/ [accessed 18 June 2018].