persuasion
Content filtering doesn’t catch implicit deception. Safety guardrails don’t prevent fabricated intimacy if the AI isn’t saying anything explicitly harmful. Warning labels don’t help if users don’t understand that emotional manipulation is happening. The control mechanisms are fundamentally different depending on whether we’re addressing harm or exploitation.
When AI Learns to Manipulate: The Line Between Harm and Exploitation (Class #15)
So is this the real threat? Not that we’ll believe false things, but that we’ll stop being able to identify true things? Where truth becomes impossible to establish?
When Anyone Can Fake Anything (Inside My AI Law & Policy Class #14)
Together, these findings illuminate a key tension in emotionally intelligent interfaces: they can evoke humanlike relational cues that increase engagement, but in doing so they may blur the line between persuasive design and emotional coercion.
AI agents will outmaneuver salespeople by optimizing persuasion