CNET is apparently ready to give AI another try at writing content.
Having experienced factual errors in an earlier test, CNET has retooled the process, according
to Futurism.
For one thing, CNET has enhanced the program with more accurate plagiarism checking. And it will offer “quality and AI likeness scores for all
generations,” Cameron Hurta, editor of AI-powered content for owner Red Ventures, told staff, Futurism reports.
Hurta explained that the likeness scores will
“help us ensure that our content still reads human-like, and is not going to get tagged as being AI content,” Futurism continues.
Earlier this month, CNET
Editor in Chief Connie Guglielmo wrote in a blog post that CNET had launched “a test using an internally designed AI engine – not ChatGPT – to help editors create a set of
basic explainers around financial services topics.”
Someone cited a factual error (“rightly,” Guglielmo acknowledged), and the team performed a full audit. Some stories
required “correction, with a small number requiring substantial correction and several stories with minor issues such as incomplete company names, transposed numbers or language that our senior
editors viewed as vague,” Guglielmo reported.
It is not yet clear whether CNET’s AI content will be marked as such going forward.