
Hao, Guozhi; Wu, Jun; Pan, Qianqian; Morello, Rosario. Quantifying the uncertainty of LLM hallucination spreading in complex adaptive social networks. In: Scientific Reports, ISSN 2045-2322, 14:1 (2024), pp. 1-13. DOI: 10.1038/s41598-024-66708-4

Quantifying the uncertainty of LLM hallucination spreading in complex adaptive social networks

Morello, Rosario
2024-01-01

Abstract

Large language models (LLMs) are becoming a significant source of content in social networks, which are a typical complex adaptive system (CAS). However, owing to their hallucinatory nature, LLMs produce false information that can spread through social networks and threaten the stability of the whole society. The uncertainty of LLM false-information spread within social networks is attributable to the diversity of individual behaviors, intricate interconnectivity, and dynamic network structures. Quantifying this uncertainty is beneficial for preemptively devising strategies to defend against such threats. To address these challenges, we propose an LLM hallucination-aware dynamic modeling method based on agent-based probability distributions, spread popularity, and community affiliation to quantify the uncertain spreading of LLM hallucinations in social networks. We set up the node attributes and behaviors in the model based on real-world data. For evaluation, we consider spreaders, informed people, and discerning and unwilling non-spreaders as indicators, and quantify the spreading under different LLM tasks, such as question answering (QA), dialogue, and summarization, as well as different LLM versions. Furthermore, we conduct experiments using real-world LLM hallucination data combined with social network features to validate the proposed quantification scheme.
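The record does not reproduce the model's equations or code, so the following is only a loose, hypothetical sketch of the kind of agent-based spreading dynamics the abstract describes: nodes on a scale-free graph move between the evaluation indicators named above (informed people are derived as everyone no longer ignorant; spreaders, discerning non-spreaders, and unwilling non-spreaders are explicit states). All transition rules, probabilities, and names below are illustrative assumptions, not the authors' actual method.

# Hypothetical agent-based sketch of hallucination spreading on a social graph.
# Indicators from the abstract: spreaders, informed (= no longer ignorant),
# discerning non-spreaders, unwilling non-spreaders. All probabilities and
# transition rules here are assumptions for illustration only.
import random
import networkx as nx

IGNORANT, SPREADER, DISCERNING, UNWILLING = range(4)

def simulate(n=1000, m=3, p_accept=0.30, p_discern=0.25,
             p_unwilling=0.35, steps=20, seed=42):
    """Return per-step state counts for one run on a scale-free graph.

    p_accept    -- chance a contacted ignorant node believes and re-spreads
                   (hypothetically scaled by a task-specific hallucination
                   rate, e.g. higher for QA than for summarization)
    p_discern   -- chance the node recognizes the content as hallucinated
    p_unwilling -- chance the node believes it but declines to spread it
    """
    rng = random.Random(seed)
    g = nx.barabasi_albert_graph(n, m, seed=seed)  # stand-in for a real network
    state = {v: IGNORANT for v in g}
    state[0] = SPREADER  # seed the hallucinated content at one node
    history = []
    for _ in range(steps):
        updates = {}  # synchronous update: new spreaders act next step
        for v in g:
            if state[v] != SPREADER:
                continue
            for u in g.neighbors(v):
                if state[u] != IGNORANT or u in updates:
                    continue
                r = rng.random()
                if r < p_accept:
                    updates[u] = SPREADER
                elif r < p_accept + p_discern:
                    updates[u] = DISCERNING
                elif r < p_accept + p_discern + p_unwilling:
                    updates[u] = UNWILLING
                # otherwise the node stays ignorant this step
        state.update(updates)
        counts = [0, 0, 0, 0]
        for s in state.values():
            counts[s] += 1
        history.append(counts)
    return history

if __name__ == "__main__":
    for t, (ig, sp, di, un) in enumerate(simulate()):
        informed = sp + di + un  # everyone who has received the content
        print(f"t={t:2d}  informed={informed:4d}  spreaders={sp:4d}  "
              f"discerning={di:4d}  unwilling={un:4d}")

In the paper's framing, a task- or version-specific hallucination rate would presumably modulate a parameter like p_accept per experiment; the sketch keeps it constant for simplicity.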
Files in this record:

Hao_2024_ScientificReports_Quantifying_Editor.pdf
  Access: open access
  Description: Publisher's version
  Type: Publisher's Version (PDF)
  License: Creative Commons
  Size: 2.31 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12318/147206
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0