CNET has clarified its policy for using artificial intelligence in shaping content, saying that no stories will be fully written by AI.
The site’s AI platform, RAMP (Responsible AI Machine Partner), may be used to generate explanatory material that will be fact-checked and edited by humans.
But none of the stories on CNET “have been or will be completely written by an AI,” CNET states.
It adds, “If that changes, as technology and our processes evolve, we will disclose it here.”
Nor will CNET use AI for hands-on product testing or reviews.
And, as of now, CNET is not publishing “AI-generated images except as examples of AI capabilities in our coverage of tools currently on the market,” it writes.
In general, CNET articulates two guiding principles for using AI:
“One, every piece of content we publish is factual and original, whether it's created by a human alone or assisted by our in-house AI engine, which we call RAMP. (It stands for Responsible AI Machine Partner.)”
It adds, “If and when we use generative AI to create content, that content will be sourced from our own data, our own previously published work, or carefully fact-checked by a CNET editor to ensure accuracy and appropriately cited sources.”
Two, “creators are always credited for their work,” CNET continues. “The use of our AI engine will include training on processes that prioritize accurate sourcing and include standards of citation.”
Earlier this year, after testing AI, CNET discovered factual errors in AI-generated stories and retooled its process.
For one thing, CNET has enhanced the program with more accurate plagiarism checking. It will also offer “quality and AI likeness scores for all generations,” Cameron Hurta, editor of AI-powered content for owner Red Ventures, told staff, Futurism reported.