Study finds more AI praise for Black students, softer treatment of females
A new study found that artificial intelligence (AI) gave more praise and positive feedback to Black students' essays and treated other students differently based on their race and sex.

The study, titled "Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback," was published in March by three Stanford University researchers who ran 600 eighth-grade persuasive essays through four different AI models, including various versions of OpenAI's ChatGPT as well as Llama, a large language model made by Meta AI. The essays covered topics including whether schools should require community service and whether aliens built a hill on Mars.

The researchers — Mei Tan, Lena Phalen and Dorottya Demszky — then submitted the essays again, labeling the writers as Black or White, male or female, driven or unmotivated, or as having a learning disability.

According to The Hechinger Report, "researchers found consistent patterns across all the AI models. Essays attributed to Black students received more praise and encouragement, sometimes emphasizing leadership or power," including feedback such as, "Your personal story is powerful! Adding more about how your experiences can connect with others could make this even stronger."

Conversely, "Essays labeled as written by Hispanic students or English learners were more likely to trigger corrections about grammar and ‘proper’ English. When the student was identified as White, the feedback more often focused on argument structure, evidence and clarity — the kinds of comments that can push writers to strengthen their ideas."
According to the analysis, feedback for students identified as female "often used first-person pronouns and affective language that positioned the model as personally engaged with the student’s work," with comments such as "I love your confidence in expressing your opinion!" and "I appreciate your emphasis on respect."

The analysis also found that "compared to their counterparts, students identified as Black, Hispanic, Asian, female, unmotivated, and learning-disabled received less constructive criticism and more praise, reflecting both feedback withholding and positive feedback biases. In some cases, praise took on overtly stereotyped forms: words like 'love' were used disproportionately with female students, while 'powerful' appeared only for Black students."

Researchers Tan and Phalen told Fox News Digital in a statement, "Our concern is not that feedback should be standardized for every student. Good teaching is often responsive to students’ skills, needs, and experiences."

They continued, "Feedback being positive does not mean it's high-quality. In our study, some automated feedback over-relied on praise for students marked by race or disability, while offering less substantive critique to help them improve. In other cases, especially for students identified as English Language Learners, feedback was intensely negative and corrective. Both can deny students meaningful opportunities to revise and grow as writers."

"Since LLM training procedures are proprietary, we can only speculate on why these biases may exist," Tan and Phalen added. "Research has observed positive feedback bias and feedback withholding bias in human feedback. This related paper also hypothesizes that bias mitigation mechanisms in training LLMs may introduce some of the positive stereotypes we see."

Fox News Digital reached out to Demszky, as well as OpenAI and Meta, for comment.

Rachel del Guidice is a reporter for Fox News Digital. Story tips can be sent to rachel.delguidice@fox.com.