Along with search and other algorithmic biases already in place, the rapid growth of AI tools has introduced additional bias into information retrieval, bias that is often not obvious until one looks closely. AI is trained on human-created content and therefore inherits the biases we all recognize. Yet AI tools, like search algorithms, carry an aura of neutrality.
Here IBM Master Inventor Martin Keen explores the root causes and consequences of algorithmic bias in AI systems. The video examines real-world examples across various sectors, illustrating how skewed outcomes arise, and presents practical mitigation strategies for building fairer, more ethical AI.
Safiya Noble is an Assistant Professor in the Department of Information Studies at UCLA's Graduate School of Education and Information Studies. In her 2016 Personal Democracy Forum (PDF) talk, Noble explains why we should care about commercial spaces dominating our information landscape.
AI tools are known to produce inaccurate information, sometimes fabricating data, sources, and even complete citations. These fabrications, including full but nonexistent citations, are often referred to as 'AI hallucinations.' When using AI tools for research or information gathering, it is crucial to verify the accuracy of all information, claims, and generated sources. One way researchers can check a citation's validity is to copy the article title into Google Scholar or the library catalog. If no matching results are found, or if elements such as the author or journal title differ, the citation may be an AI hallucination.
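The same check can be automated when a reference list is long. The sketch below is a minimal example, not an endorsed workflow: it assumes Python with the `requests` package and the public Crossref REST API (`api.crossref.org`), and the similarity threshold and the example title are illustrative choices, not drawn from the sources above.

```python
# Minimal sketch: flag possibly hallucinated citations by checking whether a
# title has a close match in Crossref's registry of published works.
# Assumes the `requests` package is installed (pip install requests).
import difflib
import requests

def looks_real(title: str, threshold: float = 0.85) -> bool:
    """Return True if Crossref lists a work whose title closely matches `title`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        # Crossref returns each work's title as a list of strings.
        for candidate in item.get("title", []):
            # Fuzzy-match, since capitalization and punctuation often differ.
            ratio = difflib.SequenceMatcher(
                None, title.lower(), candidate.lower()
            ).ratio()
            if ratio >= threshold:
                return True
    return False

if __name__ == "__main__":
    # Hypothetical title copied from an AI-generated reference list.
    suspect_title = "A Study That May Not Exist"
    if not looks_real(suspect_title):
        print("No close match found: possible AI hallucination; verify manually.")
```

A missing match is not proof of fabrication, since Crossref mainly covers DOI-registered works, so treat a result like this as a first-pass filter before checking manually in Google Scholar or the library catalog.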
Are your news sources diverse? Do you encounter viewpoints different from your own? Can you listen and try to understand the other side?
If the answer to any of these is no, you may be in a filter bubble!