Improving Wikimedia resilience against the risks of content-generating AI systems

Project Member(s): Davis, M., Rizoiu, M., Ford, H.

Funding or Partner Organisation: Wikimedia Foundation Inc.

Start year: 2023

Summary: This project explores the implications of content-generating AI systems such as ChatGPT for knowledge integrity on Wikipedia and investigates whether Wikipedia's rules and practices are robust enough to deal with the next generation of AI tools. Knowledge integrity is a foundational principle of Wikipedia practice: the verifiability of Wikipedia content is a core content policy, and Wikipedians have been working to understand how to counter systemic bias on the project for almost two decades.

The project is driven by three research questions:

RQ1: Does the potential use of ChatGPT and other AI chatbots threaten knowledge integrity on Wikipedia? If so, how?
RQ2: To what extent are current Wikipedia policies and processes able to address any risk to information integrity posed by ChatGPT?
RQ3: If there are gaps in these policies and processes which could be exploited, or which may otherwise present risks to information integrity, what policy and process interventions may mitigate these risks?

The project has two key objectives, relating to the needs of a) the Wikimedia community and b) the academic research community. First, through community interviews and focus groups, we aim to develop a series of recommendations on possible next steps for governing Wikipedia's approach to LLMs. Second, we aim to make a significant contribution to digital media policy and information systems research by presenting an example model for governing AI-generated content in public information systems.

Publications:

Davis, M & Ford, H 2024, 'Verifiability, epistemic value and the open knowledge market', Wikihistories 2024: Wikipedia and/as Data, Brisbane.

Ford, H & Davis, M 2024, 'AI, verifiability and the epistemic commons', Policy & Internet Annual Conference, University of Sydney.

Ford, H & Davis, M 2024, 'The value of verifiability in the age of AI', International Association for Media and Communication Research, Christchurch.

Ford, H, Davis, M & Koskie, TB 2024, 'The implications of generative AI for Wikipedia as a locus of public knowledge production', International Communication Association, Gold Coast.

Attard, M, Davis, M & Main, L 2023, Gen AI and Journalism, Centre for Media Transition, Sydney.

FOR Codes: The media, Technological ethics, Communication technology and digital media studies, Fairness, accountability, transparency, trust and ethics of computer systems, Knowledge and information management