Co-Intelligence
Consider the oldest, and most famous, test of computer intelligence: the Turing Test. It was proposed by Alan Turing, a brilliant mathematician and computer scientist widely regarded as the father of modern computing. Turing was fascinated by the question "Can machines think?" He realized this question was too vague and subjective to be answered
…
Ethan Mollick • Co-Intelligence
AI systems also make mistakes, tell lies, and hallucinate answers, just like humans. Each system has its own idiosyncratic strengths and weaknesses, just like each human colleague does. Understanding these strengths and weaknesses requires time and experience working with a particular AI. The abilities of AI systems range widely, from middle-school
…
AI excels at tasks that are intensely human.
AI doesn’t act like software, but it does act like a human being. I’m not suggesting that AI systems are sentient like humans, or that they will ever be. Instead, I’m proposing a pragmatic approach: treat AI as if it were human because, in many ways, it behaves like one.
Principle 4: Assume this is the worst AI you will ever use.
To make the most of this relationship, you must establish a clear and specific AI persona, defining who the AI is and what problems it should tackle. Remember that LLMs work by predicting the next word, or part of a word, that would come after your prompt. Then they continue to add language from there, again predicting which word will come next. So
…
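The next-word mechanism described in this passage can be sketched with a toy bigram model. This is a minimal illustration, not a real LLM: the mini-corpus and the `continue_text` helper are invented for this sketch, and a real model predicts over tokens with a learned neural network rather than raw co-occurrence counts. The idea it shows is the same, though: the words already in the prompt (including any persona text) determine which continuations are likely.

```python
import random
from collections import defaultdict

# Hypothetical mini-corpus for illustration only.
corpus = (
    "you are a helpful editor . you are a careful editor . "
    "you are a helpful assistant ."
).split()

# Count which words are observed to follow each word.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def continue_text(prompt, n_words=3, seed=0):
    """Extend a prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation; stop generating
        words.append(rng.choice(candidates))
    return " ".join(words)

print(continue_text("you are"))
```

In this toy corpus, "are" is always followed by "a", so every continuation of "you are" begins "you are a ..." regardless of the random seed; what comes after depends on which continuations the sampler happens to pick. That is the sense in which a persona prompt steers an LLM: it changes the context the next-word predictions are conditioned on.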
Anthropomorphism is the act of ascribing human characteristics to something that is nonhuman.
Principle 3: Treat AI like a person (but tell it what kind of person it is).
Moreover, the human-in-the-loop approach fosters a sense of responsibility and accountability. By actively participating in the AI process, you maintain control over the technology and its implications, ensuring that AI-driven solutions align with human values, ethical standards, and social norms.