ML Model Deployment Strategy
Suppose you want to deploy an ML model that is consumed via a REST API powered by Flask. Would you prefer self-deployment or a managed service?
Consider the points below while finalising the decision.
1. The ML model size is approximately 600 MB.
2. The model is updated twice a week.
3. It serves close to 100,000 (1 lakh) requests/day.
4. After installing all libraries in the Python virtual environment, the application size is close to 1.5 GB; most of that space is taken by TensorFlow and a few other libraries.
5. It should be cost-effective and scalable whenever required.
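Since the model is replaced twice a week, one self-hosted option (on a droplet or EC2) is to hot-reload the model file when it changes instead of redeploying the whole app. Below is a minimal sketch of that idea; the file path and the `loader` callable (e.g. `tf.keras.models.load_model`) are assumptions, not part of the original setup:

```python
import os


class HotReloadModel:
    """Serve a model from disk, reloading it whenever the file changes
    (e.g. after the twice-weekly model update is copied to the server)."""

    def __init__(self, path, loader):
        self.path = path        # where the updated model file is dropped
        self.loader = loader    # e.g. tf.keras.models.load_model
        self.mtime = None       # last seen modification time
        self.model = None

    def get(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self.mtime:         # a new model was pushed -> reload it
            self.model = self.loader(self.path)
            self.mtime = mtime
        return self.model
```

In the Flask view you would then call `holder.get().predict(...)`, so a model update only requires copying the new file to the server, not restarting or rebuilding a 1.5 GB environment.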
Currently it is deployed on DigitalOcean droplets; we could deploy on EC2 as well, because I didn't find any managed service that could handle such a deployment.
I have tried PythonAnywhere, Heroku, and even the DO App Platform, but none of them worked; all of them suggested upgrading to a bigger machine subscription, which is quite costly.
Let me know your views.