A STUDY OF ALGORITHMIC BIAS WITH A FOCUS ON MITIGATION PRACTICES AND AN ANALYSIS OF DISCRIMINATION-CONSCIOUS DATA MINING
Abstract
Algorithmic bias is an ethical flaw in computer systems that often goes undetected because no standard detection procedures exist. The aim of this study was to identify the sources of this bias and derive procedural solutions that can be applied widely. A meta-analysis, a case study, and sample interview statistics are used to understand how such bias propagates into generated outputs. The study concluded that a lack of diverse training data leads to biased output, compounded by a lack of awareness that such bias exists. This unawareness is amplified by the mystique surrounding deep learning algorithms. The study recommends government intervention to set standards for AI development, as well as further peer-reviewed research on the broader societal impact of these systems.