GPU-accelerated databases are mind-blowing!
Imagine a database natively integrated with best-in-class AI foundation models:
• Zero warmup latency
• Massive GPU-backed scalability
• Ability to process your data with any model
• Ability to train…
Chat with LLMs to analyze your internal data 🤯
Simply connect your database to SOTA LLMs like GPT-4o, Claude Opus, Google Gemini or Llama-3 and generate on-the-fly dashboards.
Best part? Access all of these LLMs in a single playground with just $10 a month. https://t.co/LOiJvXvo9K
I think if you care about the debates over ethics & AI, it is worth reading the OpenAI Model Spec.
It is the first comprehensive attempt to lay out the practical principles under which AI operates, written by a lab that is actually building these systems. Lots there. https://t.co/pca263IXzZ https://t.co/BTLGmXmngD
A couple of GPTs to get started with AI at work
This one helps you figure out what work tasks an AI "intern" may be able to help you with: https://t.co/ezgXgNlZEa
This one, given the interaction from the previous one, creates a prompt to make your AI intern: https://t.co/A3I7Zl0Z9O
navigating the latent space of colors 🌈✨
do LLMs “see” colors differently from humans?
we perceive colors through wavelengths while LLMs rely on semantic relationships between words. to explore this, i mapped two color spaces: https://t.co/JSHCFjYu61
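the mapping idea can be sketched like this: compare distance in a perceptual space (plain sRGB here, for simplicity) with similarity in a semantic space. the 4-d "embedding" vectors below are invented purely for illustration — a real experiment would pull vectors from an actual LLM — but they show the kind of divergence the post is exploring: black and white are maximally far apart in RGB, yet they co-occur constantly in text, so a semantic space can place them close together.

```python
import math

# Perceptual space: sRGB triples (a stand-in for a proper perceptual
# space like CIELAB).
RGB = {
    "red":   (255, 0, 0),
    "green": (0, 128, 0),
    "black": (0, 0, 0),
    "white": (255, 255, 255),
}

# Semantic space: toy 4-d vectors, MADE UP for illustration (an actual
# study would use embeddings from a real model). They encode the
# assumption that "black" and "white" co-occur in text (chess, print,
# photography) and so sit close together semantically.
EMB = {
    "red":   [0.9, 0.1, 0.0, 0.2],
    "green": [0.1, 0.9, 0.0, 0.2],
    "black": [0.0, 0.1, 0.9, 0.8],
    "white": [0.1, 0.0, 0.8, 0.9],
}

def rgb_distance(a, b):
    """Euclidean distance between two RGB triples (0 .. ~441.7)."""
    return math.dist(RGB[a], RGB[b])

def cosine_similarity(a, b):
    """Cosine similarity between two toy embedding vectors (-1 .. 1)."""
    va, vb = EMB[a], EMB[b]
    dot = sum(x * y for x, y in zip(va, vb))
    return dot / (math.hypot(*va) * math.hypot(*vb))

print(rgb_distance("black", "white"))       # ~441.7, maximal in RGB
print(cosine_similarity("black", "white"))  # ~0.99, close semantically
print(cosine_similarity("red", "green"))    # ~0.26, far semantically
```

the same pair of colors can land far apart in one space and close together in the other, which is what makes mapping the two spaces against each other interesting.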
Why Chat With PDF Is Hard And How ChatLLM Gets It Right
Chatting with long docs is hard because most LLMs, other than Gemini, don't have a large enough context window.
However, even with Gemini's 1M-token context length, in-context learning is hard: if you stuff the whole doc into the context, the model doesn't do a good job.
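A minimal sketch of the standard workaround (the post doesn't detail ChatLLM's actual method): split the document into chunks, score each chunk against the question, and put only the best chunks into the LLM prompt instead of the whole doc. Scoring here is crude word overlap; a real system would use embeddings and a vector index.

```python
from collections import Counter

def tokens(text):
    """Lowercased words with surrounding punctuation stripped."""
    return [w.strip(".,?!").lower() for w in text.split()]

def chunk(text, size=40):
    """Split text into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(question, passage):
    """Count question-word occurrences that also appear in the passage."""
    q = Counter(tokens(question))
    p = set(tokens(passage))
    return sum(n for w, n in q.items() if w in p)

def top_chunks(question, document, k=2, size=40):
    """Return the k highest-scoring chunks to place in the prompt."""
    return sorted(chunk(document, size),
                  key=lambda c: score(question, c), reverse=True)[:k]

doc = (
    "The contract starts on January 1 and runs for twelve months. "
    "Either party may terminate with thirty days written notice. "
    "Payment is due within fifteen days of each invoice."
)
print(top_chunks("When is payment due?", doc, k=1, size=10)[0])
# -> "Payment is due within fifteen days of each invoice."
```

Only the retrieved chunks go into the prompt, so the approach works with any context length; the trade-off is that the answer is only as good as the retrieval step.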
Google just released a Personal Health Large Language Model (PH-LLM), a version of Gemini fine-tuned for personal health and wellness.
When I used to race triathlon at a competitive level, I was collecting so much data: sleep data, workout metrics, professional examinations, behaviour tracking, etc.