Saving tens of thousands of dollars deploying AI at scale with Kubernetes, with John McBride

KubeFM - A podcast by KubeFM

Curious about running AI models on Kubernetes without breaking the bank? This episode delivers practical insights from someone who has done it successfully at scale. John McBride, VP of Infrastructure and AI Engineering at the Linux Foundation, shares how his team at OpenSauced built StarSearch, an AI feature that uses natural language processing to analyze GitHub contributions and answer semantic queries about them. By using open-source models instead of commercial APIs, the team saved tens of thousands of dollars.

You will learn:

- How to deploy vLLM on Kubernetes to serve open-source LLMs like Mistral and Llama, including configuration challenges with GPU drivers and DaemonSets (see the sketch at the end of these notes)
- Why smaller models (7-14B parameters) can achieve 95% of the effectiveness of larger commercial models for many tasks, given proper prompt engineering
- How running inference workloads on your own infrastructure with T4 GPUs can cut monthly costs from tens of thousands of dollars to a couple of thousand
- Practical approaches to monitoring GPU workloads in production, including handling unpredictable failures and VRAM consumption issues

Sponsor

This episode is brought to you by StackGen! Don't let infrastructure block your teams. StackGen deterministically generates secure cloud infrastructure from any input: existing cloud environments, IaC, or application code.

More info

Find all the links and info for this episode here: https://ku.bz/wP6bTlrFs

Interested in sponsoring an episode? Learn more.
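For readers who want a concrete starting point, here is a minimal sketch of the self-hosted pattern discussed in the episode: querying a vLLM server through its OpenAI-compatible API. This is an illustration, not code from the episode; the in-cluster Service URL and the exact model name are assumptions.

```python
# Minimal sketch: talking to a self-hosted vLLM server via its
# OpenAI-compatible API. Assumes the server was started with something like
#   vllm serve mistralai/Mistral-7B-Instruct-v0.2
# and exposed behind a Kubernetes Service; the URL below is hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="http://vllm.default.svc.cluster.local:8000/v1",  # hypothetical in-cluster Service
    api_key="not-needed",  # vLLM does not require a real API key by default
)

response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # an open-source 7B model, as discussed
    messages=[
        {"role": "user", "content": "Summarize this contributor's recent GitHub activity."}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```

Because vLLM speaks the same API as commercial providers, swapping a hosted endpoint for a cluster-local one like this is largely a configuration change, which is what makes the cost savings described in the episode practical.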