Seldon: a platform for packaging and deploying machine learning models
This blog gives an overview of Seldon, an MLOps tool, and explains why it is worth knowing about and what benefits it offers.
Why should you know about Seldon?
The biggest problem in the industry is productionizing machine learning models: with so many moving parts, it is difficult for data scientists and DevOps engineers to work together and get things running. Data scientists lack an understanding of cloud infrastructure, while DevOps engineers lack an understanding of the requirements of machine learning models. To solve this, Seldon provides a platform where these processes can be handled at a high level, even with limited infrastructure knowledge, by a machine learning engineer or someone in the newer role of MLOps engineer.
Who should know about it?
Anyone who is trying to productionize machine learning models, typically a Machine Learning Engineer or MLOps Engineer.
The flow above shows the high-level usage of Seldon:
- A data scientist prepares a machine learning model, either by training a new model or by fine-tuning a pre-trained model on data.
- The trained model is saved to a storage location, which can be any cloud storage as required.
- An inference class is then created to read the model from storage and serve predictions.
- Using Seldon Core, a reusable model server image (a Docker image) is built and pushed to a registry.
- This model server exposes a REST or gRPC microservice for scoring.
- It can be deployed to the cluster to serve batch or streaming data.
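As a sketch of the inference-class step above: Seldon Core's Python wrapper expects a class whose `predict` method it exposes as the scoring microservice. The class name, the stand-in "model", and the loading comment below are illustrative assumptions, not a real deployment:

```python
# Minimal sketch of a Seldon Core Python inference class.
# Class name and "model" are hypothetical placeholders.
import numpy as np

class IrisClassifier:
    def __init__(self):
        # In a real service the trained model would be loaded from the
        # storage location here (e.g. joblib.load on a mounted path).
        # A trivial linear "model" keeps this sketch self-contained.
        self.weights = np.array([0.2, 0.8])

    def predict(self, X, features_names=None):
        # Seldon Core calls predict() for each REST/gRPC scoring request;
        # X arrives as an array of feature rows, and the return value is
        # sent back as the prediction payload.
        X = np.asarray(X)
        return X @ self.weights
```

Once such a class is containerized with Seldon Core's build tooling, the REST/gRPC plumbing, request parsing, and response formatting are handled by the wrapper rather than hand-written server code.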
Why not just use Flask or Docker?
Seldon comes with benefits that make it a more suitable option:
1. All the hard work is done by the Seldon platform.
2. Seldon supports very complex inference graphs, where one can do A/B testing, break inference into separate modules, and more. The figure below shows how one can set up either a simple Seldon Core inference graph or a complex graph with a multi-armed bandit, outlier detection, and other components.
3. An easy way to containerize machine learning models.
4. Ease of deployment into the cluster, even with limited infrastructure knowledge.
5. Automated ingress configuration.
6. Advanced, customizable metrics with integration to Prometheus and Grafana.
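For a taste of how such a graph is declared, a traffic-split A/B test between two model images can be expressed in a SeldonDeployment manifest. The names, image references, and 75/25 split below are made-up placeholders, a sketch rather than a production configuration:

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: ab-test                 # hypothetical deployment name
spec:
  predictors:
  - name: model-a               # current model, receives 75% of traffic
    traffic: 75
    graph:
      name: classifier
      type: MODEL
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: registry.example.com/model-a:0.1   # placeholder image
  - name: model-b               # challenger, receives 25% of traffic
    traffic: 25
    graph:
      name: classifier
      type: MODEL
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: registry.example.com/model-b:0.1   # placeholder image
```

Because the graph is declarative, swapping the split, adding an outlier detector, or routing through a multi-armed bandit is a manifest change rather than new server code.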
The next article will cover how to use Seldon to build microservices for machine learning models.