
KServe


KServe is a standardized, distributed generative and predictive AI inference platform for scalable, multi-framework model deployment on Kubernetes.

KServe is used by many organizations and is a Cloud Native Computing Foundation (CNCF) incubating project.

For more details, visit the KServe website.


Why KServe?

A single platform that unifies generative and predictive AI inference on Kubernetes: simple enough for quick deployments, yet powerful enough to handle enterprise-scale AI workloads with advanced features.

Features

Generative AI

Predictive AI

Learn More

To learn more about KServe, its supported features, and how to participate in the KServe community, please follow the KServe website documentation. We have also compiled a list of presentations and demos that dive into various details.

:hammer_and_wrench: Installation

Standalone Installation

Kubeflow Installation

KServe is an important add-on component of Kubeflow; learn more from the Kubeflow KServe documentation. Check out the following guides for running on AWS or on the OpenShift Container Platform.

:flight_departure: Create your first InferenceService
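
As a first deployment, an `InferenceService` is declared with a short manifest naming the model format and the storage location of the model artifacts. The sketch below uses the KServe `v1beta1` API; the service name and `storageUri` are illustrative examples, not required values.

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris          # example name; choose your own
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn         # one of the supported model formats
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model  # illustrative artifact location
```

Applying the manifest with `kubectl apply -f inferenceservice.yaml` creates the service; KServe then provisions the serving runtime and exposes a prediction endpoint.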

:bulb: Roadmap

:blue_book: InferenceService API Reference

:toolbox: Developer Guide

:writing_hand: Contributor Guide

:handshake: Adopters

Star History
