Posted 8 months, 1 week ago
Roles
MLE, Full-stack SWE
Compensation Summary
We pay well and offer significant equity.
Locations
San Francisco, CA
Contacts
sam@kuzco.xyz
Description
We're building a serverless LLM inference network that uses underutilized capacity from GPU data centers. Our product is a scheduler for running LLM inference workloads on GPUs located all over the world. We currently have over 6,000 GPUs on our network and are growing quickly.

Things we need help with:
- Improving core scheduling algorithms
- Optimizing the vLLM inference runtime
- Improving our logging and observability stack
- Building our user-facing dashboard and APIs
- New products

We're well-funded and have a clear path to profitability. We're currently a four-person team of staff-level engineers and are looking to add two more engineers. Our office is near the Ferry Building in downtown San Francisco.