AI
MLServer aims to provide an easy way to start serving your machine learning models through a REST and gRPC interface, fully compliant with KFServing's V2 Dataplane spec.
- Multi-model serving, letting users run multiple models within the same process.
- Ability to run inference in parallel for vertical scaling across multiple models through a pool of inference workers.
GitHub - SeldonIO/MLServer: An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
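MLServer's docs describe custom runtimes as subclasses of `MLModel` that implement `load` and `predict` against the V2 request/response types. A minimal sketch of that shape follows; the class name, the doubling "model", and the exact decode/response details are assumptions against a recent MLServer version, not code from the README.

```python
import numpy as np
from mlserver import MLModel
from mlserver.types import InferenceRequest, InferenceResponse, ResponseOutput

class ToyModel(MLModel):
    async def load(self) -> bool:
        # A real runtime would load weights here; a constant stands in.
        self._scale = 2.0
        return True

    async def predict(self, payload: InferenceRequest) -> InferenceResponse:
        # V2 requests carry named tensors; decode the first one into numpy.
        inp = payload.inputs[0]
        data = np.asarray(list(inp.data), dtype=float).reshape(inp.shape)
        result = (data * self._scale).flatten().tolist()
        return InferenceResponse(
            model_name=self.name,
            outputs=[
                ResponseOutput(
                    name="output-0",
                    shape=[len(result)],
                    datatype="FP64",
                    data=result,
                )
            ],
        )
```

With a model-settings.json whose `implementation` points at this class, `mlserver start .` would expose it over both REST and gRPC on the V2 endpoints (e.g. `/v2/models/<name>/infer`).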
- Requirements (or constraints): What does success look like? What can we not do?
- Methodology: How will we use data and code to achieve success?
- Implementation: What infrastructure is needed in production?
Real-time Machine Learning For Recommendations
Hi everyone! How do you guys go about choosing the granularity of your ML response? For instance, let us say you have been tasked with predicting the purchase probability for an item and this is how your merch hierarchy looks:
1) department
2) category
3) sub category
4) item
The trade-off here is between granularity and response sparsity, i.e. if you ...
Discord - A New Way to Chat with Friends & Communities
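One way to act on that trade-off (not from the thread; an illustrative sketch with hypothetical names, numbers, and thresholds) is hierarchical backoff: estimate purchase probability at the finest level of the hierarchy that has enough observations, otherwise back off to a coarser level.

```python
from dataclasses import dataclass

LEVELS = ["item", "sub_category", "category", "department"]  # finest -> coarsest

@dataclass
class Stats:
    purchases: int
    impressions: int

def purchase_probability(keys, stats, min_impressions=100):
    """Return (level used, estimated purchase probability).

    keys:  level -> key for the item being scored
    stats: (level, key) -> observed counts
    min_impressions: sparsity threshold; below it, back off one level
    """
    for level in LEVELS:
        s = stats.get((level, keys[level]))
        if s is not None and s.impressions >= min_impressions:
            return level, s.purchases / s.impressions
    return "global", 0.01  # last-resort prior when every level is sparse

# The item itself is sparse (15 impressions), so the estimate backs off
# to its sub-category (hypothetical numbers throughout).
stats = {
    ("item", "sku-123"): Stats(purchases=2, impressions=15),
    ("sub_category", "running-shoes"): Stats(purchases=40, impressions=1000),
}
keys = {"item": "sku-123", "sub_category": "running-shoes",
        "category": "shoes", "department": "apparel"}
print(purchase_probability(keys, stats))  # ('sub_category', 0.04)
```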
Table (truncated in the highlight), with columns Data Collection, Experimentation, Evaluation and Deployment, Monitoring and Response:
- Metadata row: Data catalogs, Amundsen, AWS Glue, Hive metastores | Weights & Biases, MLFlow, train/test set parameter configs | A/B test tracking tools | Dashboards, SQL, metric functions and window sizes
- Unit row: Data cleaning tools | Tensorflow, ML-lib, PyTorch, Scikit-learn, X...
Shreya Shankar • "We Have No Idea How Models will Behave in Production until Production": How Engineers Operationalize Machine Learning.
Several engineers also maintained fallback models for reverting to: either older or simpler versions (Lg2, Lg3, Md6, Lg5, Lg6). Lg5 mentioned that it was important to always keep some model up and running, even if they “switched to a less economic model and had to just cut the losses.” Similarly, when doing data science work, both Passi and Jackson...
Shreya Shankar • "We Have No Idea How Models will Behave in Production until Production": How Engineers Operationalize Machine Learning.
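A sketch of the pattern the quote describes (function and model objects are hypothetical): keep a chain of progressively simpler fallbacks so that some model is always up, even if it is a less economic one.

```python
import logging

logger = logging.getLogger("serving")

def predict_with_fallback(features, primary, fallbacks):
    """Try each model in order; the last fallback should be simple enough
    to essentially never fail (e.g. a popularity or mean-rate baseline)."""
    for model in [primary, *fallbacks]:
        try:
            return model.predict(features)
        except Exception:
            logger.exception("model %r failed; reverting to next fallback", model)
    raise RuntimeError("all models failed, including the baseline")
```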
engineers continuously monitored features for, and predictions from, production models (Lg1, Md1, Lg3, Sm3, Md4, Sm6, Md6, Lg5, Lg6): Md1 discussed hard constraints for feature columns (e.g., bounds on values), Lg3 talked about monitoring completeness (i.e., fraction of non-null values) for features, Sm6 mentioned embedding their pipelines with "comm...
Shreya Shankar • "We Have No Idea How Models will Behave in Production until Production": How Engineers Operationalize Machine Learning.
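The checks Md1 and Lg3 describe map naturally onto simple column assertions. A sketch using pandas (the bounds and thresholds below are hypothetical, not from the paper):

```python
import pandas as pd

BOUNDS = {"age": (0, 120), "price": (0.0, 10_000.0)}  # hard constraints per feature
MIN_COMPLETENESS = 0.95  # minimum fraction of non-null values per column

def check_features(df: pd.DataFrame) -> list[str]:
    """Return human-readable violations for alerting; empty list means healthy."""
    violations = []
    # Hard constraints: values outside the allowed bounds (nulls handled below).
    for col, (lo, hi) in BOUNDS.items():
        bad = (~df[col].dropna().between(lo, hi)).sum()
        if bad:
            violations.append(f"{col}: {bad} values outside [{lo}, {hi}]")
    # Completeness: fraction of non-null values per column.
    for col in df.columns:
        completeness = df[col].notna().mean()
        if completeness < MIN_COMPLETENESS:
            violations.append(f"{col}: completeness {completeness:.1%} below threshold")
    return violations
```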
You have this classic issue where most researchers are evaluat[ing] against fixed data sets [... but] most industry methods change their datasets. We found that these dynamic validation sets served two purposes: (1) the obvious goal of making sure the validation set stays current with live data as much as possible, given new knowledge about the p...
Shreya Shankar • "We Have No Idea How Models will Behave in Production until Production": How Engineers Operationalize Machine Learning.
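One minimal way to make a validation set "dynamic" in this sense (a sketch; the sliding-window policy and column name are assumptions, not something the paper prescribes) is to define it as the most recently labeled slice of production data:

```python
import pandas as pd

def current_validation_set(labeled: pd.DataFrame, window_days: int = 30) -> pd.DataFrame:
    """Keep only the most recent window of labeled production examples,
    so the validation set tracks live data. Expects a 'labeled_at'
    timestamp column; the window size is a tunable assumption."""
    cutoff = labeled["labeled_at"].max() - pd.Timedelta(days=window_days)
    return labeled[labeled["labeled_at"] >= cutoff]
```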
Amershi et al. [3] state that software teams “flight” changes or updates to ML models, often by testing them on a few cases prior to live deployment. Our work provides further context into the evaluation and deployment process for production ML pipelines: we found that several organizations, particularly those with many customers, employed a multi...
Shreya Shankar • "We Have No Idea How Models will Behave in Production until Production": How Engineers Operationalize Machine Learning.
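A common mechanical core of such multi-stage rollouts is deterministic traffic splitting: route a growing fraction of requests to the candidate model, promoting it only after its metrics hold up at each stage. A sketch (stage fractions and hashing scheme are illustrative, not from the paper):

```python
import hashlib

STAGES = [0.01, 0.05, 0.25, 1.0]  # fraction of traffic at each promotion stage

def routes_to_candidate(request_id: str, stage: int) -> bool:
    """Deterministically bucket a request so the same id always sees the
    same model within a stage; advance `stage` only after the candidate's
    metrics hold up against the production model."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 10_000
    return bucket < int(STAGES[stage] * 10_000)
```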
“I look for features from data scientists, [who have ideas of] things that are correlated with what I’m trying to predict.” We found that organizations explicitly prioritized cross-team collaboration as part of their ML culture. Md3 said: We really think it’s important to bridge that gap between what’s often, you know, a [subject matter expert] in ...