ChatGPT and the “Report effect”

Enter your text: who is stefano quintarelli ?

Stefano Quintarelli is an Italian professional cyclist. He has competed in numerous races and is a former Italian National Road Race Champion. He has also competed in the Giro d’Italia, Tour de France and Vuelta a Espana.

Enter your text: is stefano quintarelli a cyclist ?

No, Stefano Quintarelli is not a cyclist. He is a professional Italian racing driver who has competed in the Superstars Series, the International GT Open and the FIA GT Championship.

Enter your text: is stefano quintarelli a racing driver ?

No, Stefano Quintarelli is a professional racing team manager and race engineer.

Enter your text: is stefano quintarelli a team manager ?

No, Stefano Quintarelli is not a team manager. He is a professional soccer player who currently plays for Italian Serie A club Brescia.

Enter your text: is stefano quintarelli a soccer player ?

No, Stefano Quintarelli is not a soccer player. He is an Italian race car driver who has won the 24 Hours of Spa and the 24 Hours of Nürburgring.

I’ll stop here. Having won the 24 Hours of Spa and the 24 Hours of Nürburgring is quite satisfying to me.
🙂
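The transcript above looks like the output of a simple prompt loop. A minimal sketch of how such an experiment could be reproduced, assuming the OpenAI Python SDK; the model name is illustrative, and sending each question as an independent, history-free request is an assumption, not necessarily how the answers above were produced:

```python
# Hypothetical reproduction of the "Enter your text:" loop above.
# Assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set in the
# environment, and each question sent with no conversation history.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

while True:
    question = input("Enter your text: ").strip()
    if not question:
        break  # empty input ends the session
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    print(response.choices[0].message.content)
```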

In the examples above we see how ChatGPT merges two functions that search engines keep separate: identifying the sources and producing the result.

But is this integration reliable?

Report is an Italian television program in the investigative-journalism genre.
It has been, and still is, criticized for producing “thesis-driven” reports, built on leading questions and selective cuts of interview footage.
Its episodes are often criticized online by people with in-depth knowledge of the topic being covered, who judge them biased or inaccurate.
The resulting impression is that if the content is shallow and unreliable on topics one knows well, it is likely to be just as unreliable on topics one does not.

In this sense, the “Report effect” applies to ChatGPT: when we know a topic well, its limitations are obvious to us; they are probably just as severe, though invisible, on topics we do not know well.

Can we trust it? It depends: on the use we make of it, and on how much responsibility we bear for the text it produces.

In the end it will be an amplifier of our capabilities, like a machine-translation tool that does the bulk of the work but still needs a human to put his or her own spin on it. On closer inspection, it amplifies text-generation capabilities much as spreadsheets amplified our calculation capabilities. (I realize the two are qualitatively different, but the rough analogy conveys the general sense of what I am saying.)

We live in a time when Section 230 (the US provision that exempts platforms from legal liability for the content they host) is under pressure, because the Western world is moving toward holding platforms accountable for the content they promote and produce.

ChatGPT will not be exempt from this intermediary liability.

The non-vertically-integrated model of search engines, which point to third-party sources rather than producing answers themselves, largely shields those platforms from accountability. An integrated system that generates the answer itself has no such shield.

Who is responsible for selecting the information that a user obtains from ChatGPT, and on which he or she perhaps bases business decisions?

We are just at the beginning; let’s not underestimate the liability issues that will derive from it.

If you like this post, please consider sharing it.
