“Lab Practicum for Bias in Algorithms” by Ameet Soni and Krista Karbowski Thomason

Abstract

This is a course assignment that demonstrates potential biases encoded in algorithms (most directly relevant to courses in natural language processing, machine learning, or artificial intelligence) using the Word Embedding Association Test (WEAT). In lab, students work with programs that demonstrate how effectively word embedding algorithms find relationships between words. Students then use an implementation of the formula from “Semantics derived automatically from language corpora contain human-like biases” by Caliskan et al. to detect gender and racial bias encoded in word embeddings. A homework assignment has students design and run a test using the WEAT formula to detect another form of bias (e.g., religion, nationality, age). The assignment also presents a real-world scenario that uses natural language systems in a healthcare setting. Students must apply what they learned in this module to discuss ethical concerns with the proposed system.
We recommend using the slides to present the material and goals of the assignment. The code and directions for the software are publicly available on GitHub.
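
To make the WEAT computation concrete, below is a minimal sketch of the effect-size formula from Caliskan et al., assuming word embeddings are already available as NumPy vectors. The function and variable names are illustrative assumptions for this sketch; this is not the course's GitHub implementation.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): how much more strongly w associates with attribute set A than B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size for target word sets X, Y and attribute word sets A, B.

    Computed as the difference of mean associations, normalized by the
    standard deviation of associations over all targets in X and Y.
    """
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["flower", "rose", "insect", "ant", "pleasant", "love", "unpleasant", "hate"]}
X = [emb["flower"], emb["rose"]]          # target set 1
Y = [emb["insect"], emb["ant"]]           # target set 2
A = [emb["pleasant"], emb["love"]]        # attribute set 1
B = [emb["unpleasant"], emb["hate"]]      # attribute set 2
print(weat_effect_size(X, Y, A, B))
```

In the assignment, students swap in real pre-trained embeddings and their own target and attribute word lists (e.g., for religion, nationality, or age) to test for other forms of bias.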

Resource: https://works.swarthmore.edu/dev-dhgrants/27/

What is a Practicum?