Bias in Large Language Models
The Focus Group “Bias in Large Language Models” involves Dieter Schwarz Fellow Prof. Gianluca Demartini (The University of Queensland) and his host Prof. Maribel Acosta Deibe (Data Engineering, TUM School of Computation, Information and Technology).
It explores how to design novel methods to predict the complexity of questions submitted to Large Language Models (LLMs). These methods then allow us to design novel algorithms that decide, given a user question, which LLMs from an ecosystem of available models are the most appropriate to query, and how to combine their answers to minimize bias and maximize diversity in the answer returned to the end user. We evaluate this experimentally in the context of political bias, where we have shown that LLMs may take certain political stances. This research enables the development of fairer and safer AI systems that can be used by everyone, reducing the risk of negative impacts on the democratic process and society.
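To make the routing-and-aggregation idea concrete, the sketch below shows one possible shape it could take. This is a minimal illustration, not the Focus Group's actual method: the complexity predictor, the model registry (`ModelSpec`), the selection rule in `route`, and the majority-vote aggregator are all hypothetical stand-ins for the learned components the text describes.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ModelSpec:
    """One LLM in the ecosystem (names and scores are illustrative)."""
    name: str
    capability: float                 # assumed: max question complexity it handles well
    answer_fn: Callable[[str], str]   # stand-in for a real LLM API call


def predict_complexity(question: str) -> float:
    """Toy complexity predictor: longer, multi-clause questions score higher.
    A real predictor would be a learned model, as the text proposes."""
    tokens = question.split()
    return min(1.0, len(tokens) / 50 + question.count(",") * 0.05)


def route(question: str, ecosystem: List[ModelSpec], k: int = 3) -> List[ModelSpec]:
    """Select up to k models whose capability matches the predicted complexity."""
    c = predict_complexity(question)
    eligible = [m for m in ecosystem if m.capability >= c] or ecosystem
    return sorted(eligible, key=lambda m: m.capability)[:k]


def aggregate(answers: List[str]) -> str:
    """Simple majority vote over the selected models' answers. A bias-aware
    aggregator would instead weight models by their measured stance."""
    most_common, _ = Counter(answers).most_common(1)[0]
    return most_common


def answer(question: str, ecosystem: List[ModelSpec]) -> str:
    selected = route(question, ecosystem)
    return aggregate([m.answer_fn(question) for m in selected])


# Usage with stub models standing in for real LLM endpoints.
ecosystem = [
    ModelSpec("small", 0.3, lambda q: "A"),
    ModelSpec("medium", 0.6, lambda q: "B"),
    ModelSpec("large", 0.9, lambda q: "B"),
]
print(answer("Which policy better reduces inflation, and why?", ecosystem))
```

In the setting the text describes, the aggregation step is where bias mitigation would happen: combining answers from models with different measured political stances can diversify the final answer rather than amplify any single stance.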