Machine





Image containing 9 individual images of the artist's face and shoulders. The artist is looking at the camera, with their head turned in a slightly different direction in each photo.

Facial Landmark Rules:

Brows: 5 Points and 4 Lines Each

Eyes: 6 Points and 6 Lines Each

Nose: 9 Points and 8 Lines
4 Points from Bridge to Tip of Nose
5 Points from Left to Right Nostril

Lips: 20 Points and 20 Lines
10 Points for Upper Lip
10 Points for Lower Lip

For Each Lip:
5 Points on Outer Edge
3 Points on Inner Edge
Corners overlap

Jaw: 17 Points and 16 Lines
8 per side ending at Ear
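
One way to see these rules as a machine might store them is as a simple lookup of point groups. The sketch below follows the widely used 68-point landmark convention (the one implemented by dlib's shape predictor); the names and index ranges are assumptions rather than this project's own code, and that convention divides the lips into 12 outer and 8 inner points instead of the 10-and-10 split listed above.

```python
# A sketch of the landmark rules as a Python data structure.
# Index ranges follow the common 68-point convention and are
# illustrative assumptions, not this project's implementation.

FACIAL_LANDMARKS = {
    "jaw":         range(0, 17),   # 17 points, 16 lines, 8 per side ending at the ear
    "right_brow":  range(17, 22),  # 5 points and 4 lines
    "left_brow":   range(22, 27),  # 5 points and 4 lines
    "nose_bridge": range(27, 31),  # 4 points from bridge to tip of nose
    "nose_bottom": range(31, 36),  # 5 points from left to right nostril
    "right_eye":   range(36, 42),  # 6 points and 6 lines (closed loop)
    "left_eye":    range(42, 48),  # 6 points and 6 lines (closed loop)
    "outer_lips":  range(48, 60),  # outer edge of both lips, corners shared
    "inner_lips":  range(60, 68),  # inner edge of both lips, corners shared
}

# 68 points in total under this convention.
assert sum(len(indices) for indices in FACIAL_LANDMARKS.values()) == 68
```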


Image containing 9 individual images of the artist's face and shoulders. The artist is looking at the camera, with their head turned in a slightly different direction in each photo. A facial recognition wireframe is overlaid on the image.



This is how the system maps and labels the features of the face.


Under this system, my face is recognized with a maximum confidence of only 89 percent across 9 angles. In one image, my face is not recognized at all. In the others, my gender is incorrectly and confidently reported, ranging from a high of 95 percent confidence to a low of 62 percent.
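
Mechanically, that confidence is just a number the software attaches to each candidate face, and an image whose best candidate falls below a threshold is reported as containing no face at all. The sketch below shows the general mechanism using dlib's frontal face detector; the choice of library is an assumption (the percentages above come from a different system, and raw detector scores are not percentages), and classify_gender is a hypothetical placeholder for whatever separate model assigns a gender label.

```python
import dlib

# Assumed tooling: dlib's HOG-based frontal face detector.
detector = dlib.get_frontal_face_detector()
img = dlib.load_rgb_image("portrait.jpg")  # hypothetical input image

# run() returns candidate boxes, a score for each, and the index of
# the sub-detector that fired; lowering the threshold (third argument)
# surfaces weaker candidates that would otherwise be discarded.
boxes, scores, sub_detectors = detector.run(img, 1, -1.0)

if len(boxes) == 0:
    print("No face found in this image.")
for box, score in zip(boxes, scores):
    print(f"Face at {box} detected with score {score:.2f}")
    # A gender label would come from a separate classifier.
    # classify_gender() is a hypothetical placeholder, not a real API:
    # label, confidence = classify_gender(img, box)
```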















How can I convince this machine of my gender?















How can I communicate something that is the sum of so many little moments, gestures, and decisions?



The relative lightness of my skin serves to make me more legible to facial recognition software, as studies have shown that these technologies often fail to accurately recognize dark-skinned faces. Yet as this technology, calibrated to whiteness, gazes upon my face, I am still misgendered.

As these technologies are finding their way into the hands of law enforcement, I am left thinking about how trans people of color are particularly threatened by the intersection of this transphobic and racist tech with transphobic and racist policing.



Consider another set of faces:




Image contains 9 portrait images of faces of different genders and races. All of the faces look real but were created with AI.




None of these faces belong to a human body. They were generated by another algorithm: a generative adversarial network trained on 70,000 faces and tasked with creating new ones. They are composite images: an assembly of textures, shapes, and colors gathered into formations according to the algorithm's rules for what constitutes a face.
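
A generative adversarial network is two models locked in an argument: a generator that fabricates images from random noise, and a discriminator that judges whether an image came from the set of real faces. The sketch below is a deliberately tiny version of that adversarial loop in PyTorch, with illustrative shapes and hyperparameters; it is not the network that produced these portraits.

```python
import torch
import torch.nn as nn

# Tiny illustrative GAN: faces are treated as flattened 64x64
# grayscale images only to keep the sketch short. All sizes and
# hyperparameters here are assumptions.
IMG_DIM, NOISE_DIM = 64 * 64, 128

generator = nn.Sequential(          # noise -> fabricated image
    nn.Linear(NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(      # image -> probability it is "real"
    nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_faces: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to separate real
    faces from fabricated ones, then the generator learns to fool it."""
    batch = real_faces.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: real faces should score 1, generated faces 0.
    fakes = generator(torch.randn(batch, NOISE_DIM))
    d_loss = (loss_fn(discriminator(real_faces), ones)
              + loss_fn(discriminator(fakes.detach()), zeros))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: its fabrications should be scored as real.
    g_loss = loss_fn(discriminator(fakes), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Repeating that step over batches drawn from a large collection of real portraits is the whole training process: each model improves only by measuring itself against the other, until the generator's composites pass for faces.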















What happens when we point one algorithm at another?















Facial Landmark Rules:

Brows: 5 Points and 4 Lines Each

Eyes: 6 Points and 6 Lines Each

Nose: 9 Points and 8 Lines
4 Points from Bridge to Tip of Nose
5 Points from Left to Right Nostril

Lips: 20 Points and 20 Lines
10 Points for Upper Lip
10 Points for Lower Lip

For Each Lip:
5 Points on Outer Edge
3 Points on Inner Edge
Corners overlap

Jaw: 17 Points and 16 Lines
8 per side ending at Ear





Image contains 9 portrait images of faces of different genders and races. All of the faces look real but were created with AI. A facial recognition wireframe is overlaid on the image.

















What are they saying to each other?















In their conversation, the composite faces are more legible than my own. They see each other with greater certainty and clarity. They can agree on what a face is and place it into specific emotional and gender categories.



















(Note: This is the low-resolution/bandwidth version of the project. For other viewing options, visit https://landmarks.cloud/ using a non-mobile device.

More information on this project can be found by viewing the source code for the project HERE.)