Most People Trust Accurate Search Results When the Stakes Are High
March 8, 2023
By Tom Fleischman, Cornell Chronicle
People's assumptions about how language is used can lead to flawed judgments of whether a piece of text was written by AI or by a human, Cornell Tech and Stanford researchers found in a series of experiments.
While individuals' ability to detect AI-generated language was essentially a toss-up, people were consistently influenced by the same verbal cues, leading them to the same flawed judgments.