AI Company Accused of Using Humans to Fake Its AI
It’s a common fear that artificial intelligence could steal our jobs, but in the case of one of China’s leading voice recognition companies, it might be more of a case of humans taking work from AI.
On Friday, iFlytek was hit with accusations that it hired humans to fake its simultaneous interpretation tools, which are supposedly powered by AI.
In an open letter posted on Quora-like Q&A platform Zhihu, interpreter Bell Wang claimed he was one of a team of simultaneous interpreters who helped translate the 2018 International Forum on Innovation and Emerging Industries Development on Thursday. The forum claimed to use iFlytek’s automated interpretation service.
While a Japanese professor spoke in English at the conference on Thursday morning, a screen behind him showed both an English transcription of his remarks and what appeared to be a simultaneous Chinese translation credited to iFlytek. Wang claims that the Chinese text wasn’t a simultaneous machine translation, but rather a transcription of an interpretation delivered by himself and a fellow interpreter. “I was deeply disgusted,” Wang wrote in the letter.
In the open letter, Wang pointed to two examples to support his claim. First, iFlytek’s tool appeared to struggle with the Japanese professor’s English accent, rendering “Davos Forum” as “Devil’s Forum.” Despite the transcription error, the Chinese translation came out correct, matching the interpretation given by Wang’s partner. Second, the majority of the Chinese translation was identical to what Wang and his partner had interpreted, including a tricky conjunction that Wang believes would have been translated literally had it really been handled by AI. Despite his concerns, Wang said he didn’t confront the forum organizer or iFlytek staff at the event.
In a post on Friday on social media platform Weibo, iFlytek’s CEO Hu Yu said the tool used in the conference was a transcribing tool — not a translation tool. Earlier, financial news outlet Securities Times reported that in response to the letter, iFlytek had directed media to remarks that the company’s president Liu Qingfeng made on Monday, when he said: “Currently machines still cannot replace interpreters. Frankly, a combination of human and machine is where we’re headed.” When contacted by Sixth Tone on Friday, iFlytek’s public relations officer denied that Liu’s words constituted an official response and said that the company is currently investigating the accusation. Sixth Tone also reached out to the forum organizers, but received no response.
In a video of the forum, an artificial voice read out Wang and his partner’s interpretation, according to Wang. Wang believes that iFlytek must have transcribed the interpreting they had done in the morning and used a robot to read it, just to prove their AI worked. “This is obviously a scam,” Wang wrote.
The livestream of the event was no longer available by Friday afternoon. Wang had not responded to Sixth Tone’s interview request by the time of publication. Following the open letter, other interpreters came forward, claiming that they, too, had been offered work interpreting on iFlytek’s behalf.
This is not the first time iFlytek has been accused of disguising work done by flesh-and-blood interpreters as the output of its AI-powered product. Last year, another simultaneous interpreter accused iFlytek of hiding the interpreters’ existence from the keynote speakers while providing interpretation services that appeared to come from the AI product.
Additional reporting: Liya Fan; Editor: Julia Hollingsworth.
(Header image: A Japanese professor gives a speech on iFlytek’s AI tech in Shanghai, Sept. 20, 2018. @bellwang from Zhihu)