This is how the system maps and labels the features of the face.
Under this system, my face is recognized with a maximum confidence of
only 89 percent across 9 angles. In one image, my face is not recognized
at all. In the others, my gender is reported incorrectly and with confidence,
at a high of 95 percent and a low of 62 percent.
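The system behind these numbers is not named here, but the shape of the query is familiar from open-source face-analysis tools. The following is a minimal sketch of that kind of analysis, assuming the open-source deepface library as a stand-in for the system described above; the image filenames are hypothetical placeholders for the nine portraits.

```python
# A minimal sketch of the kind of query described above, using the
# open-source deepface library as a stand-in for the (unnamed) system in
# the text. The filenames are hypothetical placeholders for the nine
# portraits taken from different angles.
from deepface import DeepFace

IMAGE_PATHS = [f"self_portrait_{i:02d}.jpg" for i in range(9)]

for path in IMAGE_PATHS:
    try:
        # Recent deepface versions return a list with one dict per detected face.
        results = DeepFace.analyze(img_path=path, actions=["gender"])
        for face in results:
            # "gender" holds per-label confidences; "dominant_gender" the top label.
            print(path, face.get("dominant_gender"), face.get("gender"))
    except ValueError:
        # Raised when no face can be detected in the image at all.
        print(path, "no face detected")
```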
How can I convince this machine of my gender?
How can I communicate something that is the sum of so many little
moments, gestures, and decisions?
The relative lightness of my skin makes me more legible to facial
recognition software: studies have shown that these technologies often fail
to accurately recognize dark-skinned faces. Yet as this technology,
calibrated to whiteness, gazes upon my face, I am still misgendered.
As these technologies are finding their way into the hands of law
enforcement, I am left thinking about how trans people of color are
particularly threatened by the intersection of this transphobic and racist
tech with transphobic and racist policing.
Consider another set of faces:
None of these faces belongs to a human body. They were generated by another
algorithm: a generative adversarial network trained on 70,000 faces and
tasked with creating new ones. They are composite images: an assembly of
textures, shapes, and colors gathered into formations according to the
algorithm's rules for what constitutes a face.
What happens when we point one algorithm at another?
What are they saying to each other?
In their conversation, the composite faces are more legible than my own.
They see each other with greater certainty and clarity. They can agree
on what a face is and place it into specific emotional and gender
categories.
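What that conversation might look like in practice can be sketched in a few lines, again assuming the deepface library as the classifying algorithm; generated_face.jpg is a hypothetical placeholder for an image sampled from a GAN of the kind described above, and self_portrait_00.jpg for one of my own portraits.

```python
# A sketch of one algorithm reading another's output: the same classifier is
# pointed at a GAN-generated composite face and at one of my own portraits,
# and their gender and emotion readings are printed side by side.
# Filenames are hypothetical placeholders; deepface stands in for whatever
# classifier the project actually uses.
from deepface import DeepFace

def describe(path: str) -> None:
    """Print the classifier's gender and emotion readings for one image."""
    results = DeepFace.analyze(img_path=path, actions=["gender", "emotion"])
    for face in results:  # recent deepface versions: one dict per detected face
        print(path)
        print("  gender: ", face.get("dominant_gender"), face.get("gender"))
        print("  emotion:", face.get("dominant_emotion"))

describe("generated_face.jpg")    # a composite face with no body behind it
describe("self_portrait_00.jpg")  # one of the nine portraits analyzed earlier
```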
(Note: This is the low-resolution/bandwidth version of the project.
For other viewing options, visit https://landmarks.cloud/ using a non-mobile device.
More information can be found by viewing the project's source code HERE.)