HAI – Stanford University: “New research shows we can only accurately identify AI writers about 50% of the time. Scholars explain why (and suggest solutions)…

AI-generated text is increasingly making its way into our daily lives. Auto-complete in emails and ChatGPT-generated content are becoming mainstream, leaving humans vulnerable to deception and misinformation. Even in contexts where we expect to be conversing with another human – like online dating – the use of AI-generated text is growing. A survey from McAfee indicates that 31% of adults plan to or are already using AI in their dating profiles. What are the implications and risks of using AI-generated text, especially in online dating, hospitality, and professional situations, areas where the way we represent ourselves is critically important to how we are perceived?…

The real concern, according to Jeff Hancock, is that we can create AI “that comes across as more human than human, because we can optimize the AI’s language to take advantage of the kind of assumptions that humans have. That’s worrisome because it creates a risk that these machines can pose as more human than us,” with a potential to deceive…”