More of the Same: Generative AI and Content Conformity Through Feedback Loops and Simplification of Source Material

As artificial intelligence (AI) goes mainstream, with competitive applications threatening jobs in the creative industries – advertising, journalism, and design – should we begin to worry about another dystopian scenario: that of growing content futility?

With more and more users employing generative AI, be it to write ads, university assignments, or informative content, it is unlikely that the apps they choose will rely on a diverse corpus of documents and data. On the contrary: AI, as of today, cannot reliably distinguish which texts are
– of profound professional quality,
– acceptable by academic standards,
– grounded in news values and ethics, and
– free of non-factual claims.

To steer clear of conspiracy theories, for example, the cheaper generative AI applications might select popular texts – those with the most views – as sources. More mainstream textual AI applications will run algorithms that favor higher-quality documents. Yet there are limits to the capacity for rephrasing, so some users will end up with texts very similar to those published by their peers. This will be the case wherever generative AI has not been banned by university faculty or senior staff.

Either way, in the short and medium term, we will have to expect feedback loops leading to more of the same content than we are used to. In some cases, this might ‘freeze’ the answers to research questions or the treatment of topics, with many valid new approaches neglected by global research.

In the longer term, to maintain originality, academics will have to rely on in-depth studies of the state of research, while journalists will continue to employ elite researchers on the hunt for quality material. There is some reason to hope that academic journals will set up clear barriers against research produced with generative AI – at least as long as the choices made by AI applications remain simplistic and lazy.

We may also see more specialized databases containing avant-garde scientific articles, with better gate-keeping and verification of validity. Such databases, as well as high-quality journalistic texts, will likely sit behind paywalls more often than before, leading to two classes of readers: on the one hand, a majority content with more basic information; on the other, a sophisticated minority enjoying more complete and novel content.

In all likelihood, content futility will not come about. We will continue to have access to a diverse supply of information, though at a lower standard of quality – at least for those able to discern it. Given that most AI apps lack access to paid content, and – to mention it again – given the downward feedback loop affecting content, there will likely be more uniformity of information in the online world.

Thorsten Koch, MA, PgDip
Policyinstitute.net
01 April 2023
