
GPT-based “Denial of Information” attack

Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research. Our analysis of a selection of
— Read on misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/

I think we can define a new type of attack on the Internet. Much like a Denial of Service attack makes a service unavailable to ordinary users, a Denial of Information attack makes readily searchable information obscure by inundating it with generative-AI nonsense or outright misinformation.

This can be either a malicious attack or an act of negligence.

A malicious attack would be threat actors deliberately flooding information channels, such as social media or search results (via SEO manipulation), with misinformation intended to influence society.

A negligent attack takes one of two forms. Either end users make misguided attempts to use LLMs to churn out content faster, inundating traditional systems with unverifiable data; or data-retrieval infrastructure (such as search engines or LLMs) uses generative AI to compile information without adequate gates to verify it.

A Denial of Information attack is more insidious than a Denial of Service attack because it is far harder to detect, and harder still to neutralize, given the individualized nature of information retrieval and consumption.
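The dilution effect is easy to see with a toy model. The sketch below (my own illustration, not from the cited study) assumes the worst case, in which a ranker cannot distinguish genuine documents from AI-generated noise and therefore orders results at random. It estimates the probability that at least one genuine document survives onto the first page of results as the noise pool grows.

```python
import random

def top_k_hit_rate(genuine, noise, k=10, trials=10_000):
    """Estimate the chance that at least one genuine document
    appears in the top-k results, assuming the ranker cannot
    tell genuine from noise (i.e. uniform random ordering)."""
    pool = [True] * genuine + [False] * noise  # True = genuine document
    hits = 0
    for _ in range(trials):
        # Draw the "first page" of k results at random from the pool.
        page = random.sample(pool, k)
        if any(page):
            hits += 1
    return hits / trials

# As fabricated papers flood in, genuine results vanish from page one.
for noise in (0, 100, 1_000, 10_000):
    rate = top_k_hit_rate(genuine=20, noise=noise)
    print(f"noise={noise:>6}: P(genuine on first page) ~ {rate:.2f}")
```

With no noise the first page always contains genuine material; with 10,000 fabricated documents swamping 20 genuine ones, it almost never does. Real rankers are better than random, of course, but the Harvard study's point is that GPT-fabricated papers mimic scientific writing well enough to erode exactly that advantage.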

What do you think?
