navigating the latent space of colors 🌈✨
do LLMs “see” colors differently from humans?
we perceive colors through wavelengths while LLMs rely on semantic relationships between words. to explore this, i mapped two color spaces: https://t.co/JSHCFjYu61
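A rough sketch of how such a comparison could work, assuming a sentence-embedding model stands in for the LLM side; the model choice, color list, and PCA projection here are illustrative, not the author's actual setup:

```python
# Illustrative sketch: compare an embedding space of color words
# with a perceptual color space (RGB here, for simplicity).
import numpy as np
from matplotlib import colors as mcolors
from sklearn.decomposition import PCA
from sentence_transformers import SentenceTransformer  # assumed embedding backend

color_names = ["red", "orange", "yellow", "green", "cyan", "blue", "purple", "pink"]

# Perceptual side: RGB coordinates of each named color.
rgb = np.array([mcolors.to_rgb(name) for name in color_names])

# Semantic side: embed the color words, then project down to 3 dimensions.
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
emb_3d = PCA(n_components=3).fit_transform(model.encode(color_names))

# Compare the pairwise-distance structure of the two spaces.
def pairwise_dists(x):
    return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

iu = np.triu_indices(len(color_names), 1)
corr = np.corrcoef(pairwise_dists(rgb)[iu], pairwise_dists(emb_3d)[iu])[0, 1]
print(f"correlation between distance matrices: {corr:.2f}")
```

A higher correlation would suggest the semantic space roughly mirrors perceptual color distances; a low one would point to the two spaces organizing colors differently.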
Don't use Sci-Hub — it's a "controversial" website with 84M+ research papers freely available.
We should try to make billion-dollar academic publishers richer.
Here's an updated thread on integrating Sci-Hub with Zotero to get free papers.
Please don't do this😉
"We hope that such tools may help us to gain novel insight into the psychology of an understudied pool of humans—namely, the dead"
Overview of work on HLLMs - language models trained on historical texts to simulate historical attitudes and perspectives. https://t.co/joGjm7brgs https://t.co/lg59eRS1So
Cool experiment where researchers assemble an AI translation “company” from AI agents with simulated backgrounds filling various roles, from editors to proofreaders.
The AI “company” produces translations of Chinese web novels that readers prefer over both GPT-4's and human translators' versions https://t.co/7lxg2jEjZi
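A minimal sketch of the role-based pipeline pattern, assuming an OpenAI-compatible chat API; the role prompts and model are placeholders, not the paper's actual setup:

```python
# Illustrative role-based translation pipeline (not the paper's code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

ROLES = {
    "translator": "You are a senior Chinese-to-English literary translator.",
    "editor": "You are an editor. Improve fluency and terminology consistency of the draft.",
    "proofreader": "You are a proofreader. Fix only remaining grammar and typo issues.",
}

def run_role(role: str, text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": ROLES[role]},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

def translate(chapter: str) -> str:
    # Each "employee" refines the previous one's output.
    draft = run_role("translator", chapter)
    edited = run_role("editor", draft)
    return run_role("proofreader", edited)
```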
We just shipped two new features to our console that I think are going to completely change prompt engineering with Claude.
-Prompt generator: Claude writes your prompts for you.
-Variables: Easily inject external info into your prompt.
Let’s walk through how they work in five...
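For context, the variables feature amounts to a prompt template with placeholders filled in before the request is sent. Here is a minimal sketch of that idea using the Anthropic Python SDK; the template, variable names, and client-side fill step are assumptions for illustration, not the console's internals:

```python
# Illustrative sketch: fill {{variables}} in a prompt template, then call Claude.
import re
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

TEMPLATE = "Summarize the following support ticket for {{audience}}:\n\n{{ticket}}"

def fill(template: str, variables: dict) -> str:
    # Replace each {{name}} placeholder with its value.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables[m.group(1)], template)

prompt = fill(TEMPLATE, {
    "audience": "an on-call engineer",
    "ticket": "App crashes when uploading files over 2 GB.",
})

message = client.messages.create(
    model="claude-3-opus-20240229",  # illustrative model choice
    max_tokens=300,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```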
Chat with LLMs to analyze your internal data 🤯
Simply connect your database to SOTA LLMs like GPT-4o, Claude Opus, Google Gemini or Llama-3 and generate on-the-fly dashboards.
Best part? Access all of these LLMs in a single playground with just $10 a month. https://t.co/LOiJvXvo9K
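The underlying pattern here is roughly text-to-SQL: the LLM turns a question into a query against your schema, and the results feed a dashboard. A minimal sketch under those assumptions (the schema, question, and model are placeholders, and this is not the linked product's implementation):

```python
# Minimal text-to-SQL sketch of the "chat with your database" pattern (illustrative only).
import sqlite3
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, created_at TEXT)"
conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute("INSERT INTO orders VALUES (1, 'acme', 120.0, '2024-04-03')")

question = "What is total revenue per customer?"

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": f"Write a single SQLite query for this schema:\n{SCHEMA}\nReturn only SQL, no code fences."},
        {"role": "user", "content": question},
    ],
)
sql = resp.choices[0].message.content.strip()
print(sql)
print(conn.execute(sql).fetchall())  # rows that could then populate a dashboard
```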
🚨BREAKING: US @NIST publishes 1st draft of its "AI Risk Management Framework: Generative AI Profile." Important information & quotes:
➡️This is a comprehensive document that contains an overview of risks unique to or exacerbated by generative AI (GAI) and an extensive list of actions to manage GAI's...
Instead of treating AGI as a binary threshold, I prefer to treat it as a continuous spectrum defined by comparison to time-limited humans.
I call a system a t-AGI if, on most cognitive tasks, it beats most human experts who are given time t to perform the task.
More details:
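One rough way to formalize that definition (the notation is mine, not the author's):

```latex
% S is the system, \mathcal{T} a distribution over cognitive tasks,
% and E_t the population of human experts given time budget t.
\[
S \text{ is a } t\text{-AGI} \iff
\Pr_{\tau \sim \mathcal{T}}\!\Big[
  \mathrm{score}(S, \tau) > \mathrm{score}(e, \tau)
  \ \text{for most experts } e \in E_t
\Big] > \tfrac{1}{2}.
\]
```

Under this reading, capability grows continuously with t: a one-second-AGI, a one-hour-AGI, and a one-month-AGI are increasingly strong claims rather than a single binary threshold.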