On-Premise or Cloud - Where Should You Host Your AI Applications?
How much do you think — are we going to get tons of specialization, or once the brain gets big enough and we do these fine-tunings, is that going to be it? It will be like AWS, GCP, and Azure.
Ali: I think the answer is closer to the latter. It’s going to have lots of specialization. Having said that, it’s not a dichotomy in the sense that maybe ...
Ali Ghodsi • AI Food Fights in the Enterprise
📦 Service Deployment - Ray Serve (https://lnkd.in/eAV-Y6RN)
🧰 Data Transformation - Ray Data (https://lnkd.in/e7wYmenc)
🔌 LLM Integration - AIConfig (https://lnkd.in/esvH5NQa)
🗄 Vector Database - Weaviate (https://weaviate.io/)
📚 Supervised LLM Fine-Tuning - Hugging Face TRL (https://lnkd.in/e8_QYF-P)
📈 LLM Observability - Weights & Biases Tra...
Feed | LinkedIn
A solution is to self-host an open-source or custom fine-tuned LLM. Opting for a self-hosted model can reduce costs dramatically - but at the price of additional development time, maintenance overhead, and possible performance implications. Choosing a self-hosted solution means weighing these trade-offs carefully.
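One rough way to reason about that trade-off is a back-of-envelope break-even calculation: at what monthly token volume does a dedicated GPU become cheaper than per-token API pricing? The sketch below is illustrative only - the function names are mine and the prices are placeholder assumptions, not real quotes from any provider.

```python
def api_cost(tokens_per_month: float, price_per_1k_tokens: float) -> float:
    """Monthly cost of a hosted LLM API billed per 1K tokens."""
    return tokens_per_month / 1_000 * price_per_1k_tokens

def self_hosted_cost(gpu_hourly_rate: float, hours_per_month: float = 730) -> float:
    """Monthly cost of an always-on GPU instance (compute only)."""
    return gpu_hourly_rate * hours_per_month

def break_even_tokens(price_per_1k_tokens: float, gpu_hourly_rate: float,
                      hours_per_month: float = 730) -> float:
    """Token volume at which self-hosting starts to win on raw compute."""
    return self_hosted_cost(gpu_hourly_rate, hours_per_month) / price_per_1k_tokens * 1_000

# Illustrative numbers only: $0.002 per 1K tokens vs. a $1.20/hr GPU.
volume = break_even_tokens(price_per_1k_tokens=0.002, gpu_hourly_rate=1.20)
print(f"{volume:,.0f} tokens/month")  # break-even around 438 million tokens/month
```

Below that volume the API is cheaper on compute alone - and the development time and maintenance overhead mentioned above push the real break-even point even higher.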