Success Story

interLink Entered the CNCF Sandbox - Cloud-Native HPC Integration

INFN

The challenge of bridging cloud-native technologies with high-performance computing infrastructure has long been a barrier for organizations seeking to leverage their existing HPC investments for modern containerized workloads. Traditional HPC systems, optimized for batch processing through schedulers like Slurm, operate fundamentally differently from the dynamic, orchestrated environments that modern cloud-native applications expect. This architectural divide has prevented many research institutions and organizations from fully utilizing their substantial supercomputing resources for contemporary AI and cloud-native workflows.

The interLink project has successfully addressed this challenge, as confirmed by a recent milestone: it has joined the Cloud Native Computing Foundation (CNCF) Sandbox, marking a significant step toward democratizing access to HPC resources through cloud-native interfaces.


The Challenge

Two Worlds Colliding

Organizations worldwide face a common dilemma: they have invested heavily in HPC infrastructure optimized for traditional scientific workloads, yet modern AI applications and cloud-native services are built around Kubernetes orchestration. The gap between these two paradigms has resulted in:

  • Resource underutilization: Powerful HPC systems sitting idle while teams deploy workloads to expensive cloud providers
  • Technical complexity: The need for specialized knowledge to bridge Kubernetes and HPC environments
  • Operational overhead: Managing separate infrastructure stacks for different types of workloads
  • Limited scalability: Inability to burst containerized workloads to available HPC resources when needed

The Solution

interLink's Virtual Bridge

interLink provides an elegant solution by extending Kubernetes through Virtual Kubelet technology, creating a transparent bridge between container orchestration and HPC batch systems. The project enables organizations to:

  • Seamlessly offload Kubernetes pods to HPC resources without requiring changes to existing applications or HPC infrastructure. Users can submit standard Kubernetes workloads that are automatically translated into appropriate batch jobs for Slurm or other HPC schedulers.
  • Maintain operational simplicity through a plugin-based architecture that accommodates various backend integrations (not only HPC/Slurm) while preserving the familiar Kubernetes API experience for developers and data scientists.
  • Preserve existing security and resource management policies on HPC systems while adding modern multi-tenant capabilities and authentication layers that organizations expect from cloud-native platforms.
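From a user's perspective, offloading can look like an ordinary pod manifest that simply targets the interLink virtual node. A minimal sketch is shown below; the node name and toleration key are illustrative placeholders, as the actual values depend on how a given interLink deployment labels and taints its virtual node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hpc-offload-demo
spec:
  # Pin the pod to the virtual node exposed by interLink.
  # "my-interlink-node" is a hypothetical name for this example.
  nodeSelector:
    kubernetes.io/hostname: my-interlink-node
  # Virtual nodes are typically tainted so that only pods that
  # explicitly opt in are scheduled there; the key below is an
  # assumption — check the taint on your own virtual node.
  tolerations:
    - key: virtual-node.interlink/no-schedule
      operator: Exists
  containers:
    - name: payload
      image: busybox:stable
      command: ["sh", "-c", "echo running on the HPC backend"]
```

Once such a pod is submitted with `kubectl apply`, the virtual node's plugin translates it into a batch job for the configured backend (for example, Slurm), with no change to the application itself.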
CNCF Sandbox

The CNCF Sandbox is the entry point for early-stage projects.

CNCF Sandbox: A Validation of Innovation

The acceptance of interLink into the CNCF Sandbox represents more than a technical achievement: it is a validation of the project's approach to solving real-world infrastructure challenges. This milestone brings several immediate benefits:

  • Community Recognition: CNCF Sandbox status establishes interLink as a legitimate solution within the cloud-native ecosystem, increasing adoption confidence among enterprise users and research institutions.
  • Enhanced Visibility: The official domain launch at https://interlink-project.io provides a central hub for documentation, community resources, and project updates, making it easier for new users to discover and adopt the technology.
  • Collaborative Platform: Regular community calls create opportunities for users to showcase their implementations, share best practices, and influence the project’s roadmap based on real-world use cases.
  • Ecosystem Integration: Being part of the CNCF landscape opens doors for integration with other cloud-native projects and tools, expanding interLink’s capabilities and compatibility.

Real-World Impact: Beyond the Numbers

The true measure of interLink’s success lies in its practical applications across diverse use cases:

  • Research Institutions are now running AI inference workloads on their supercomputers, leveraging shared file systems to efficiently cache large language model weights across multiple deployments, dramatically reducing storage overhead and deployment times.
  • Scientific Computing Centers are providing researchers with familiar Kubernetes interfaces while utilizing their existing HPC infrastructure, eliminating the need for separate container orchestration clusters.
  • Enterprise Organizations are implementing hybrid cloud strategies that burst overflow workloads to their on-premises HPC systems, reducing cloud costs while maintaining performance for compute-intensive applications.

Looking Forward

Building the Community

The CNCF Sandbox achievement marks the beginning of an exciting growth phase for interLink. The project team is actively reaching out to current users to schedule presentation slots in upcoming community calls, creating opportunities for adopters to:

  • Share success stories and innovative use cases with the broader CNCF community
  • Build networks with other organizations implementing similar hybrid infrastructure strategies
  • Gain visibility for their innovative projects and technical achievements
  • Influence development by providing feedback on features and functionality that matter most to real-world deployments

Organizations currently working with interLink who haven’t heard from the team are encouraged to reach out directly to ensure their voices are included in these community showcases.

The Broader Implications

interLink’s success reflects a broader shift in how organizations think about infrastructure optimization. Rather than replacing existing investments, the project demonstrates how strategic integration can unlock new value from established systems. This approach resonates particularly well in the research and scientific computing communities, where HPC investments represent decades of expertise and significant capital.

The project’s open-source nature, combined with its CNCF affiliation, ensures that these benefits remain accessible to organizations of all sizes, from academic research groups to large enterprise computing centers.

Acknowledgments

This achievement would not have been possible without the strong support from the interTwin community and INFN (Istituto Nazionale di Fisica Nucleare), whose expertise in both particle physics computing and cloud-native technologies has been instrumental in shaping interLink’s development and validation.

The collaborative approach fostered by these partnerships exemplifies the best of open-source development, where diverse expertise combines to solve complex technical challenges that benefit the entire community.