By Junior Bernadin
As technology advances and artificial intelligence (AI) becomes more prevalent in our daily lives, it is crucial that the creators and developers of these technologies accurately represent and include diverse groups.
For example, on Jan. 3, I generated a series of about one hundred images using the word “beautiful” in conjunction with the words “kids,” “baby,” and “girl” on DALL-E, a tool that uses AI to generate images from written descriptions.
Unfortunately, hardly any Black babies, kids, or girls were represented in the art generated from the terms “beautiful baby,” “beautiful kids,” or “beautiful girls.”
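Readers who want to repeat the experiment can do so programmatically. What follows is a minimal sketch, not the exact setup I used: it assumes the OpenAI Python SDK (the `openai` package) with an `OPENAI_API_KEY` environment variable, and the model name, batch size, and output handling are all illustrative assumptions.

```python
# Minimal sketch (assumptions: openai Python SDK installed, OPENAI_API_KEY set).
# Generates a batch of images for each prompt and prints their URLs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = ["beautiful baby", "beautiful kids", "beautiful girl"]

for prompt in prompts:
    # dall-e-2 allows up to 10 images per request; loop for larger batches.
    response = client.images.generate(
        model="dall-e-2",
        prompt=prompt,
        n=10,
        size="512x512",
    )
    for i, image in enumerate(response.data):
        print(f"{prompt!r} image {i}: {image.url}")
```

Tallying by hand who appears in the resulting images, as I did, is all it takes to see which groups a model's training data has taught it to associate with the word “beautiful.”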
The word “beautiful” is often used to describe art and photographs, and it can be especially significant when it is used to describe images of Black people. There are a few reasons why this is the case.
Historically, Black people have often been marginalized and excluded from mainstream definitions of beauty. This has contributed to harmful stereotypes and biases about Black people’s appearance.
When Black youth are rarely represented in a positive light in mainstream media and art, it can be difficult for them to feel confident and valued in society.
This lack of representation is not only harmful and offensive, but it also negatively impacts the development and accuracy of AI systems.
It is my belief that bias occurs in AI when there is not enough representation, both in the data sets and among those involved in the machine learning process, such as developers and stakeholders.
In other words, if the people creating and inputting data into these systems are not diverse and representative of different cultures and communities, the resulting AI will also be biased and exclude certain groups.
This is especially concerning in the case of facial recognition technology, which has been shown to exhibit significant biases against people of color.
For example, a study conducted by the National Institute of Standards and Technology (NIST) found that “most facial recognition algorithms were more accurate for lighter-skinned males than darker-skinned females” (NIST, 2019).
The technology is more likely to recognize and identify white men accurately while consistently misidentifying and excluding Black women.
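Findings like NIST’s come from disaggregated evaluation: the same accuracy metric is computed separately for each demographic group rather than for the test set as a whole. Here is a minimal sketch of that idea, using made-up records purely for illustration:

```python
from collections import defaultdict

# Made-up records for illustration only: (demographic_group, was_match_correct).
results = [
    ("lighter-skinned male", True),
    ("lighter-skinned male", True),
    ("lighter-skinned male", True),
    ("darker-skinned female", True),
    ("darker-skinned female", False),
    ("darker-skinned female", False),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in results:
    tallies[group][0] += int(correct)
    tallies[group][1] += 1

# An overall accuracy number can look fine while one group fares far worse.
for group, (correct, total) in sorted(tallies.items()):
    print(f"{group}: {correct}/{total} correct ({correct / total:.0%})")
```

The overall accuracy in this toy example is 67 percent, yet one group sees 100 percent accuracy while the other sees 33 percent, which is exactly the kind of gap an aggregate score hides.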
These biases and exclusions have real-life consequences, as seen in the case of biometric border control systems. In 2018, the Electronic Frontier Foundation (EFF) reported that “customs officers have been using facial recognition to screen travelers at border crossings, including US citizens, for several years” (EFF, 2018).
However, the technology has been shown to have higher error rates for people of color, leading to false positives and wrongful detentions.
The lack of diversity in the development and creation of AI also perpetuates harmful stereotypes and reinforces systemic racism.
For example, the near-absence of Black babies, kids, and girls in the DALL-E-generated images reinforces the harmful stereotype that beauty exists only in certain races and further perpetuates the exclusion of Black women and children from societal beauty standards.
Diverse developers and scientists must be included in the creation and development of AI systems to ensure that the technology accurately represents and includes all groups.
Junior Bernadin is dean of students and director of information technology for the Ron Clark Academy.