In addition to the various privacy problems encountered in Italy, the "Terminator" scenario, the spread of fake news, and students' schoolwork, ChatGPT has also begun to defame people. It is a problem that could flood OpenAI, the company behind the chatbot, with complaints if it is not resolved as soon as possible.
So far there are only two notable cases, but they involve a public official and a law professor, figures who, given their roles, have already threatened what could become the first defamation lawsuits against OpenAI over artificial intelligence.
The mayor accused of corruption
They are Brian Hood, mayor of Hepburn Shire, a municipality near Melbourne, Australia, and Jonathan Turley, a law professor at George Washington University, a private university in Washington, United States. The former learned from some constituents that ChatGPT had falsely identified him as guilty in an overseas bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s.
In this case, the artificial intelligence's error stems from the fact that Hood actually worked for a subsidiary of the bank, Note Printing Australia. What ChatGPT failed to grasp is that Hood was the person who reported to the authorities the bribes paid to foreign officials to win currency-printing contracts.
His lawyers, who confirmed that he has never been accused of any crime, sent a letter to OpenAI on March 21 asking it to correct the mistake, giving the company 28 days before suing for defamation. But according to Reuters, no response has yet been received.
The professor accused of harassment
The latter, meanwhile, discovered the defamation thanks to a colleague, who had asked ChatGPT to provide a list of legal scholars guilty of sexually harassing someone, and found Turley's name on the list. According to the artificial intelligence, the professor had verbally harassed and attempted to sexually assault a female student during a class trip to Alaska, citing a 2018 Washington Post article as its source. The problem is that no such article ever existed, and Turley never went on a trip to Alaska.
Unlike Hood, Turley, who told the Washington Post that he has never harassed anyone, does not appear to have taken any action against OpenAI yet, but he could do so if the error is not corrected soon.
Commenting on the incident, Niko Felix, a spokesperson for OpenAI, said that "when users sign up for ChatGPT, we strive to be as transparent as possible about the fact that it may not always generate accurate answers," the Washington Post also reports.