In 2019, OpenAI announced GPT-2 with this post:
https://t.co/jjP8IXmu8D
Today (~5 years later) you can train your own for ~$672, running on one 8XH100 GPU node for 24 hours. Our latest llm.c post gives the walkthrough in some detail:
https://t.co/XjLWE2P0Hp
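As a quick sanity check, the implied hourly rates fall straight out of the tweet's own numbers (nothing below assumes any particular cloud provider's pricing):

```python
# Back-of-envelope check of the quoted ~$672 GPT-2 training cost.
# The hours, GPU count, and total are from the tweet; the rates are derived.
hours = 24          # one day of training
gpus = 8            # one 8xH100 node
total_cost = 672.0  # USD

node_rate = total_cost / hours  # implied $/hour for the whole node
gpu_rate = node_rate / gpus     # implied $/hour per H100

print(f"Implied node rate:    ${node_rate:.2f}/hr")  # $28.00/hr
print(f"Implied per-GPU rate: ${gpu_rate:.2f}/hr")   # $3.50/hr
```

That ~$3.50/H100-hour is in the ballpark of current on-demand rental pricing, which is what makes the one-day, one-node reproduction plausible.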
Chat with LLMs to analyze your internal data 🤯
Simply connect your database to SOTA LLMs like GPT-4o, Claude Opus, Google Gemini, or Llama-3 and generate on-the-fly dashboards.
Best part? Access all of these LLMs in a single playground for just $10 a month. https://t.co/LOiJvXvo9K
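The tweet doesn't show how this works under the hood, but the usual pattern behind "chat with your data" tools is text-to-SQL: give the model the schema, get a query back, run it. A minimal sketch, assuming the OpenAI Python SDK; the schema, question, and database file are illustrative, not the product's actual internals:

```python
# Sketch of the text-to-SQL pattern behind "chat with your data" tools.
import sqlite3
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the env

client = OpenAI()

# Hypothetical schema and question, standing in for "your internal data".
SCHEMA = "CREATE TABLE orders (id INTEGER, region TEXT, amount REAL, created_at TEXT);"
question = "What is total revenue by region?"

# Step 1: ask the model to translate the question into SQL for this schema.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Translate the user's question into a single SQLite query "
                    f"for this schema:\n{SCHEMA}\nReply with SQL only."},
        {"role": "user", "content": question},
    ],
)
sql = resp.choices[0].message.content.strip().strip("`")

# Step 2: run the generated SQL; in a real tool, rows feed a dashboard widget.
conn = sqlite3.connect("analytics.db")  # hypothetical local database
for row in conn.execute(sql):
    print(row)
```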
Cool experiment: researchers assemble an AI translation "company" staffed by AI agents with simulated backgrounds filling various roles, from editors to proofreaders.
The AI "company" produces accurate translations of Chinese web novels that readers prefer over both GPT-4's and human translators' versions https://t.co/7lxg2jEjZi
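The linked paper's code isn't shown here, but the role-playing pattern it describes is easy to sketch: each "employee" is just an LLM call with a role-specific system prompt, chained in sequence. A sketch assuming the OpenAI Python SDK; the roles and prompts are illustrative, not the researchers' actual setup:

```python
# Sketch of a role-playing agent pipeline: translator -> editor -> proofreader.
# Not the paper's code; roles, prompts, and model choice are assumptions.
from openai import OpenAI

client = OpenAI()

def agent(role_prompt: str, text: str) -> str:
    """One 'employee': an LLM call conditioned on a simulated background."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

chapter = "..."  # a Chinese web-novel chapter (placeholder)

draft = agent("You are a senior Chinese-to-English literary translator. "
              "Translate the chapter, preserving tone and style.", chapter)
edited = agent("You are an editor at a translation company. Improve the flow "
               "and consistency of this English draft.", draft)
final = agent("You are a proofreader. Fix any remaining grammar or "
              "punctuation issues; change nothing else.", edited)
print(final)
```

The interesting design choice is that each stage sees only the previous stage's output, mirroring how work passes between roles in a real company.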
GPU-accelerated databases are mind-blowing!
Imagine a database natively integrated with best-in-class AI foundation models (one concrete version is sketched after this list):
• Zero warmup latency
• Massive GPU-backed scalability
• Ability to process your data with any model
• Ability to train and fine-tune models on your data
There are 1.7 million deployments of PostgreSQL worldwide, one o...
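One concrete version of the "database natively integrated with models" idea already exists in the PostgreSQL ecosystem: semantic search inside the database via the pgvector extension. A minimal sketch, assuming psycopg2, the pgvector Python adapter, and OpenAI embeddings; the connection string and table are hypothetical, and this is not whatever product the tweet alludes to:

```python
# Semantic search inside PostgreSQL via pgvector: embeddings live next to
# the rows they describe, and nearest-neighbor search runs in the database.
import numpy as np
import psycopg2
from pgvector.psycopg2 import register_vector  # pip install pgvector
from openai import OpenAI

client = OpenAI()
conn = psycopg2.connect("dbname=app user=postgres")  # hypothetical DSN
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
conn.commit()
register_vector(conn)  # teach psycopg2 about the vector column type

cur.execute("""CREATE TABLE IF NOT EXISTS docs (
                 id SERIAL PRIMARY KEY,
                 body TEXT,
                 embedding vector(1536));""")

def embed(text: str) -> np.ndarray:
    # text-embedding-3-small produces 1536-dim vectors.
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# Index a document, then retrieve nearest neighbors by cosine distance (<=>).
cur.execute("INSERT INTO docs (body, embedding) VALUES (%s, %s)",
            ("quarterly revenue report", embed("quarterly revenue report")))
cur.execute("SELECT body FROM docs ORDER BY embedding <=> %s LIMIT 3",
            (embed("sales figures"),))
print(cur.fetchall())
conn.commit()
```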
Don't use Sci-Hub — it's a "controversial" website with 84M+ research papers freely available.
We should try to make billion-dollar academic publishers richer.
Here's an updated thread on integrating Sci-Hub with Zotero to get free papers.
Please don't do this😉
1/ I finally read Leopold Aschenbrenner's essay series on AI: Situational Awareness
Everyone, regardless of their interest in AI, should read this.
I took notes; they're sloppy, but I figured I'd share.
Welcome to the future: https://t.co/8WmDNvylrz