Salad - GPU Cloud | 10k+ GPUs for Generative AI
Priceless AI
pricelessai.com
sari added
The human-centric platform for production ML & AI
Access data easily, scale compute cost-efficiently, and ship to production confidently with fully managed infrastructure, running securely in your cloud.
Infrastructure for ML, AI, and Data Science | Outerbounds
Nicolay Gerold added
dstack is an open-source toolkit and orchestration engine for running GPU workloads. It's designed for development, training, and deployment of gen AI models on any cloud.
Supported providers: AWS, GCP, Azure, Lambda, TensorDock, Vast.ai, and DataCrunch.
Latest news ✨
- [2024/01] dstack 0.14.0: OpenAI-compatible endpoints preview (Release)
- [2023/12] dst
dstackai • GitHub - dstackai/dstack: dstack is an open-source toolkit for running GPU workloads on any cloud. It works seamlessly with any cloud GPU providers. Discord: https://discord.gg/u8SmfwPpMd
Nicolay Gerold added
dstack will start supporting autoscaling in March. You can configure multiple clouds, and it deploys to the cheapest one.
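The multi-cloud note above can be sketched as a dstack run configuration. This is a hedged sketch based on dstack around version 0.14; the exact keys may differ, so check the dstack docs. The idea is that with several backends (AWS, GCP, Azure, etc.) configured on the server, dstack matches the resource request against offers from all of them and provisions the cheapest one.

```yaml
# .dstack.yml — hedged sketch, not verified against a live dstack install
type: task
commands:
  - python train.py  # placeholder workload
resources:
  gpu: 24GB  # dstack compares offers from all configured clouds for this spec
```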
Grass: Earn A Stake in the AI Revolution
getgrass.io
Mark Fishman added
Crowdsourced compute, “passive income”
Sonya Huang • Generative AI’s Act Two
Darren LI added
Create stunning visuals in seconds with AI.
clipdrop.co
sari added
Announcing Together Inference Engine – the fastest inference available
November 13, 2023・By Together
The Together Inference Engine is multiple times faster than any other inference service, with 117 tokens per second on Llama-2-70B-Chat and 171 tokens per second on Llama-2-13B-Chat
Today we are announcing Together Inference Engine, the world’s fast...
Nicolay Gerold added