
KubeCon Paris 2024 – Artificial Intelligence with Kubernetes and the Future of Cloud Native

07 April 2024

By Yann Albou.

An AI-themed KubeCon celebrating 10 years of Kubernetes!

What an event this year! The largest in the history of the Cloud Native Computing Foundation (CNCF), with more than 12,000 attendees!
Even over 3 days of conference it was hard to see everything: so many talks, sponsors, discussions, sponsor evenings, …
Kubernetes is already more than 10 years old!
To mark the occasion, the CNCF invites you to submit a design for the Kubernetes 10th anniversary logo: https://www.linuxfoundation.org/kubernetes-10-year-logo-contest

Unsurprisingly, the major theme of KubeCon was Artificial Intelligence in Cloud Native mode with Kubernetes, but not only that!
Here is a non-exhaustive summary of the conferences that interested me, grouped by theme:

  • Artificial Intelligence: “Kubernetes must become the standard for AI”
  • Supply Chain: a unique platform!
  • WASM: The WebAssembly trend
  • Sustainability: responsible innovation
  • Infrastructure-as-Code: The Invisible Omnipresent!
  • Observability and OpenTelemetry
  • Platform Engineering: feedback
  • Kubernetes: obviously!
  • Security: remains a major subject

Artificial Intelligence: “Kubernetes must become the standard for AI”

The first keynote by Priyanka Sharma (Executive Director, Cloud Native Computing Foundation) opened with Artificial Intelligence.
Cloud Native will not escape the AI trend, but it clearly needs standards and tools.
All of this is on the way, and the CNCF intends to play an important role in this acceleration of AI, with the sharing of models, processes and know-how:

“Kubernetes must become the AI standard”, Priyanka Sharma.

Training & Inference

Training is the first phase of an AI model: it can involve a process of trial and error, or showing the model examples of inputs and outputs.
Inference is the execution of a model once it has been trained, so it occurs after the learning stage; we can also call it the field-deployment phase of the AI model. At this point the model has already been computed from data sets, and the system can draw conclusions or make predictions from the knowledge it has learned.

Kubernetes and the CNCF are now positioned across the board and use the orchestration platform both to manage the lifecycle of Machine Learning (AI/ML/LLM Ops) and to run models and applications.
A new CNCF working group, Cloud Native AI, published a white paper giving an overview of recent AI/ML techniques.

Introduction to the Cloud Native Artificial Intelligence (CNAI) initiative:

This trend is driving developments in Kubernetes, in particular on:

  • GPU support: particularly with NVIDIA drivers
  • New GPU-sharing techniques to maximize GPU usage
  • Dynamic Resource Allocation (DRA): a new way to request available resources, in k8s 1.26+
  • New Kubernetes tools/operators: for example KAITO, the Kubernetes AI Toolchain Operator, which simplifies running OSS AI models on your AKS clusters
  • Optimization using local LLMs: Ollama
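To illustrate the GPU support point, here is a minimal sketch of a Pod requesting a GPU through the NVIDIA device plugin (the image name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference
spec:
  containers:
    - name: model-server
      image: registry.example.com/llm-server:latest  # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1  # resource exposed by the NVIDIA device plugin on the node
```

With DRA, the same need is expressed instead through ResourceClaim objects, decoupling the request from a fixed device-plugin resource name.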

Self-Hosted LLMs on Kubernetes: A Practical Guide – Hema Veeradhi & Aakanksha Duggal, Red Hat

Large language models (LLMs) are quickly gaining popularity and the idea of deploying and managing your own LLM can be very interesting, especially in terms of security, confidentiality, and customization. This talk walks us through the process towards a clear and practical understanding of self-hosting LLMs on Kubernetes.

Self-Hosting: Advantages and Considerations

  • For or against self-hosting? It depends on your use cases: self-hosted models may be necessary for privacy, reliability, or compliance reasons.
  • Advantages of self-hosting:
    • Increased security, privacy and compliance.
    • Advanced customization.
    • Avoids vendor lock-in.
    • Reduced computational costs.
    • Easy for beginners or those just getting started in the field.


Hosted LLM

Here is a list of open-source LLMs,
and a paper on "tracking openness of instruction-tuned LLMs".

Watch the video of this session

Future of Intelligent Cluster Ops: LLM-Azing Kubernetes Controllers – Rajas Kakodkar, VMware & Amine Hilaly, AWS

Another way to use AI and LLMs is to apply language models to improve operations on Kubernetes. They integrated LLMs with CRDs and Kubernetes controllers via an operator, LLMnetes, to simplify cluster management through a natural-language interface with requests like:

  • Scan images for pods and deployments
  • Break my cluster load balancers
  • Deploy a CronJob that deletes a random pod in the cluster every 2 hours

This project wants to go further with questions like “Can I upgrade my cluster to 1.29?” which will require analyzing the audit logs, cluster resources and reading the changelog.md of the k8s versions.

Warning: LLMs are not deterministic!!!

As mentioned: Avoid querying a machine learning model for tasks that require precise data.

Watch the video of this session

Supply Chain: a unique platform!

Build and delivery remain strong trends at KubeCon with developments to take into account MLOps but also many conferences on the security aspects of the Supply Chain.

The keynote by Solomon Hykes (CEO, Dagger.io) on the "Future of Application Delivery in a Containerized World" retraced 10 years of Kubernetes evolution, with a strong message on continuously improving the software factory: it must serve your needs and be a central place that takes your business to the next level as a key differentiator:

Every Factory is Unique

He also framed the factory itself as a platform (here again, platform engineering plays an essential role – more on this below).

SLSA: Supply-chain Levels for Software Artifacts

Several very interesting talks covered best practices to protect yourself as much as possible from supply chain attacks. We wrote a blog post on this topic: "How to protect yourself from a Supply Chain Attack?".

In particular the security framework SLSA was highlighted:

SLSA (now in version 1.0) defines the guidelines for supply-chain security with different levels, and contains several "Tracks", in particular the "Build track", which focuses on provenance (from source to artifact):

Chain of Trust?

Scanning our artifacts for vulnerabilities is excellent but not enough; we must also guard against tampering with any of these elements.

So how can we increase the level of trust in our software supply chains? What are these approaches based on?

  • Attestation: an authenticated statement (via metadata) about a software artifact or collection of software artifacts
  • Signature: guarantees that artifacts have not been modified or compromised during distribution
  • SBOM (Software Bill of Materials): essentially a detailed inventory of all the software components in a product, including libraries, modules, packages and other dependencies

Here are some tools used:

  • In-toto: to manage and generate attestations
  • Sigstore/cosign: For signing software, attestations, SBOMs and other metadata
  • SPIFFE/Spire: to build workload identities
  • TUF (The Update Framework): provides a framework that can be used to secure new and existing software update systems, and means to minimize the impact of key compromises
  • Vault: For secret and PKI management
  • Docker BuildKit: to generate attestations (in in-toto JSON format) and the SBOM
     docker buildx build --sbom=true --provenance=true ...
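To give an idea of what such an attestation looks like, here is a heavily abridged sketch of an in-toto Statement carrying SLSA provenance (subject name, digest and predicate fields are illustrative placeholders):

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "registry.example.com/app",
      "digest": { "sha256": "<image digest>" }
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "buildDefinition": { "buildType": "<how the build was defined>" },
    "runDetails": { "builder": { "id": "<who ran the build>" } }
  }
}
```

The statement binds a predicate (here, provenance) to the digest of a specific artifact, and is then signed, for example with cosign.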

There are really a lot of tools to secure this CI/CD:

SLSA Tooling

The CNCF is working on a document to map these tools

Also don’t forget that you need to secure your source code! By signing your commits, protecting your branches, having a suitable RBAC, authentication via MFA, use of SSH keys…

To demonstrate the whole CI/CD chain, Tekton with Tekton Pipelines was often used.

Also note the FRSCA project ("Factory for Repeatable Secure Creation of Artifacts"), which provides tooling and an implementation of the CNCF Secure Software Factory Reference Architecture, following SLSA recommendations.

Supply Chain: The maturity of container construction and deployment

CI/CD practices around containers are strengthening and gaining maturity. Three conferences particularly caught my attention:

Building Container Images the Modern Way – Adrian Mouat, Chainguard

This talk compared several approaches to building container images,
starting the "old way" with a plain docker build (which produces images that are not reproducible, are too large, and contain a lot of CVEs):

FROM golang
WORKDIR /work
COPY . /work
RUN go build -o hello ./cmd/server
ENTRYPOINT ["/work/hello"]

To move towards a “Distroless Multistage Docker Build” approach so as to have only the bare minimum in the image without a package manager or shell…
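A minimal sketch of such a multistage distroless build, reusing the Go example above (base image tags illustrative):

```dockerfile
# Build stage: full Go toolchain
FROM golang:1.22 AS build
WORKDIR /work
COPY . /work
# Static binary so it can run on a distroless/static base
RUN CGO_ENABLED=0 go build -o /work/hello ./cmd/server

# Final stage: no shell, no package manager
FROM gcr.io/distroless/static-debian12
COPY --from=build /work/hello /hello
ENTRYPOINT ["/hello"]
```

Only the compiled binary ships in the final image, which drastically reduces size and attack surface.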

Distroless images can be built with:

  • ko: specific to Go images
  • Bazel: by Google; produces good distroless images but remains complex to understand
  • Chiselled Containers: by Canonical, based on Ubuntu, for Java and .NET
  • Buildpacks: a CNCF project using TOML configuration to produce images (less lightweight than the others)
  • Dagger with BuildKit: a CI/CD project much broader than image building
  • Nix: produces good distroless images but is difficult to navigate: too many ways of doing things
  • apko & Wolfi: by Chainguard, also produces good distroless images, but in a simpler way, via a declarative configuration:

     contents:
       repositories:
         - https://dl-cdn.alpinelinux.org/alpine/edge/main
       packages:
         - alpine-base
     entrypoint:
       command: /bin/sh -l
     environment:
       PATH: /usr/sbin:/sbin:/usr/bin:/bin

Misconfigurations in Helm Charts: How Far Are We from Automated Detection and Mitigation? – Francesco Minna, Vrije Universiteit Amsterdam & Agathe Blaise, Thales SIX

In this session, they presented several tools for checking Helm chart compliance, including Checkov, Datree, KICS, KubeLinter, Kubeaudit, Kubescape and Terrascan.
These tools test and examine whether Helm charts comply with best practices, particularly when it comes to security.

Helm Chart tools

The following repository contains a list of public Helm charts and was used to test the tools: https://github.com/fminna/mycharts

Several interesting points were raised:

  • Most projects require additional Linux permissions or capabilities
  • The most common misconfigurations concern ClusterRoles, missing memory limits, and use of the default namespace

Misconfigurations in Helm Charts

Regarding the tools, the feedback is:

  • Helm chart analyzers still require significant manual intervention
  • The same mitigation can satisfy one tool but not another (rules that contradict each other)
  • There are false positives and false negatives

Watch the video of this session

GitOps Continuous Delivery at Scale with Flux – Stefan Prodan

Following the announcement that Weaveworks was ceasing its activity, one could seriously worry about the future of the GitOps product Flux.

This presentation was reassuring about the future of this product:

  • Many features are moving to GA soon
  • New features planned: Notary Project integrations, CDEvents integrations, Helm OCI improvements, …
  • Lots of work on optimization: “Mean Time To Production” benchmark, Horizontal scaling, Flux sharding for multi-cluster delivery, …

Add to this the fact that Flux is a CNCF project that reached the "Graduated" maturity level on November 30, 2022, and the sustainability of the solution looks well assured!

Watch the video of this session

WASM: The WebAssembly trend

This is not a new technology, but it was strongly present at KubeCon, in particular during the first keynotes.

WebAssembly (WASM) is a standard binary instruction format designed as a portable compilation target with near-native performance. The standard consists of a bytecode, its textual representation, and a sandboxed execution environment compatible with JavaScript. It can run inside and outside a web browser.
Many programming languages today have a WebAssembly compiler, including: Rust, C, C++, C#, Go, Java, Lua, Python, Ruby, …

WebAssembly is very efficient, fast and secure which makes it compatible with the Cloud Native world.

Leveling up Wasm Support in Kubernetes – Matt Butcher, Fermyon

This talk explores the use of WebAssembly in Kubernetes through the SpinKube tool.

The strengths of WebAssembly:

  • Security sandbox
  • Cross-platform, cross Architecture
  • Very fast startup and execution
  • Can support any language

Fermyon positions WASM as an evolution of containers for adapted use cases:

In particular, for the serverless pattern: containers are excellent for running long-lived processes, while a WASM-type technology suits serverless workloads that can cold-start instantly, run to completion and stop.

How does it work?

  • Spin is an open source project for building Serverless WebAssembly applications
  • SpinKube: runs Spin applications in Kubernetes

SpinKube contains all the elements necessary to run WASM Spin applications in k8s, based on the runwasi project (for the containerd shim part) and KWasm for the operator part.

WASM Spin Architecture

There was clearly a strong interest in WASM and SpinKube due to the performance and its integration into the Cloud Native world:
WASM Spin Performance

Watch the video of this session

Sustainability: responsible innovation

We’re talking about Artificial Intelligence but have we forgotten last year’s theme on sustainability and eco-responsibility?

The keynote called for “responsible innovation”: keep innovating, but do it using open source and with cost and energy savings in mind.

Projects like WASM are moving in this direction, and LLMs are entering a cycle of model optimization, but is this enough?

Building IT Green: A Journey of Platforms, Data, and Developer Empowerment at Deutsche Bahn

This session shows real business use cases on sustainability at Deutsche Bahn.

They first closed the gap between the machines' average CPU usage and the CPU actually used by applications, by enforcing the definition of requests and limits and by using the VPA (VerticalPodAutoscaler).
Then they used schedulers to scale down or stop workloads that are unused outside working hours or during less active periods…
Besides saving a lot of energy, this also saves money.
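As a sketch, enforcing this kind of right-sizing with a VerticalPodAutoscaler object might look like this (the target Deployment name is illustrative):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app        # illustrative target workload
  updatePolicy:
    updateMode: "Auto"  # let the VPA apply its request recommendations
```

The VPA continuously adjusts the pods' CPU and memory requests to match their real consumption.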

The measurement of energy costs is done with the Kepler project:

This makes it possible to measure impacts per project, or even the impact from one change to the next, and thus to track a global carbon footprint against a target.
These dashboards are now part of the life of projects and constitute information that can influence certain decisions…

Watch the video of this session

Cloud Native Sustainability Efforts in the Community – TAG Environmental Sustainability – Antonio Di Turi, Data Reply Gmbh & Kristina Devochko, Independent

The TAG Environmental Sustainability supports projects and initiatives related to the delivery of cloud-native applications, including their creation, packaging, deployment, management and operation.


The objectives of this group are:

  • Provide resources, a review process and a methodology to help CNCF projects assess their sustainability footprint and adopt environmental sustainability practices.
  • Collaborate with CNCF project maintainers to produce meaningful analyses of that footprint.
  • Communicate the results of cloud native sustainability assessments of projects.

The SCI score will help calculate the carbon intensity of software:
SCI score
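For reference, the Software Carbon Intensity (SCI) specification from the Green Software Foundation defines the score as:

```
SCI = ((E × I) + M) per R

E = energy consumed by the software (kWh)
I = location-based carbon intensity of that energy (gCO2eq/kWh)
M = embodied emissions of the hardware used
R = functional unit (e.g., per user, per API call, per device)
```

Expressing the score per functional unit makes it a rate, so it can be compared across releases and drives optimization rather than offsetting.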

And the goal being to produce these metrics:
Sustainability metrics

The Falco project has requested its first “Green” review and other CNCF projects will follow.

Watch the video of this session

Observability and OpenTelemetry

Lots of tracks on observability, with a particular focus on OpenTelemetry but also on AI.

OpenTelemetry: Project Updates, Next Steps, and AMA – Severin Neumann, Cisco; Austin Parker, Honeycomb; Trask Stalnaker, Microsoft; Daniel Gomez Blanco, Skyscanner; Alolita Sharma, Apple

The OpenTelemetry project is everywhere and widely used.

In the month before this KubeCon alone, there were 25,000 contributions!!!
In short, its popularity is no longer in doubt.

News on the logging side:

  • Stable logging SDK: C++, .Net, Java and PHP
  • Experimental logging SDK: Go, JS, Python, Rust, Erlang/Elixir

OpenTelemetry defines “Semantic Conventions” which specify common names for different types of operations and data, which helps bring standards to the codebase, libraries and platforms.
Semantic Conventions are available for traces, metrics, logs and resources, all in a stable version for the HTTP part.
Work is in progress on the Databases, Messaging, RPC and System parts, plus the latest addition: AI/LLM!!!
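For example, here are a few attribute names from the stable HTTP semantic conventions (the values are illustrative):

```yaml
http.request.method: GET
http.response.status_code: 200
server.address: shop.example.com
url.path: /api/orders
```

Because every SDK and backend agrees on these names, dashboards and alerts can be written once and work across languages and vendors.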

OpenTelemetry signals are being extended to the client side, and no longer only to the application and infrastructure levels.

Definition of a new “Entity” concept to better represent the telemetry producer.

The OpenTelemetry project has applied to be a CNCF “Graduated” project and thus demonstrate its stability and maturity.

See the video of this session

Observability TAG: A Review and the Rise of Gen-AI Observability – Alolita Sharma & Matt Young, Apple; Vijay Samuel, eBay; Bartłomiej Płotka, Google

The “Observability TAG” working group aims to enrich the ecosystem of projects related to observability in cloud-native technologies, by identifying gaps, sharing best practices, educating users and CNCF projects, while providing a neutral space for discussion and community involvement.

Observability Whitepaper is available.
It focuses on improving observability in cloud-native systems, providing an overview of observability signals, such as metrics, logs and traces, and discusses best practices, challenges and potential solutions. It aims to clarify the concept of observability, providing valuable insights for engineers and organizations seeking to improve the reliability, security and transparency of their cloud-native applications and infrastructures.

The Observability TAG co-sponsored the Cloud Native AI working group (see the AI chapter above) and reviewed several projects around AI:

  • OpenLLMetry: set of extensions built on OpenTelemetry that give you full observability on your LLM application.
  • K8sGPT: tool to analyze your Kubernetes clusters, diagnose and triage issues in plain English
  • Logging Operator: It solves problems related to logging in Kubernetes environments by automating the deployment and configuration of a Kubernetes logging pipeline.

There is clearly an “Observability + Gen AI” trend in order to:

  • Detect anomalies
  • Analyze trends
  • Understand distributed traces
  • Improve data quality
  • Perform root-cause analysis
  • Suggest next steps

Observability + Gen AI

Watch the video of this session

Infrastructure-as-Code: The Invisible Omnipresent!

IaC was not a headline trend at the conference: there was no session dedicated to, say, Terraform or Ansible.
On the other hand, in most talks these Infra-as-Code tools were used as a basic foundation!

In addition, there are still 2 notable points in this area:

The first concerns the many informal exchanges and discussions around Terraform following HashiCorp's license change (Business Source License: BSL v1), which led to a fork of the Terraform 1.5 release to create the OpenTofu project (https://opentofu.org/)!

  • What will Hashicorp’s positioning be?
  • Should we migrate our Terraform projects to OpenTofu?
  • Will the versions diverge?
  • Will there be Terraform <-> OpenTofu compatibility?
  • What about Support?
  • Can the open-source model survive while making money?

Terraform is strongly established in companies and projects, and depending on how these 2 projects evolve, this could lead to a lot of change… To be continued.

The second point concerns the strong presence of Crossplane which is seriously gaining popularity and which is, without a doubt, a very interesting project.

Crossplane Intro and Deep Dive – the Cloud Native Control Plane Framework – Jared Watts & Philippe Scorsolini, Upbound

Crossplane is an IaC tool that defines itself as a “Cloud Native control plane” to provision and manage all your resources based on Kubernetes!

It is a project very well established in the CNCF ecosystem and, above all, production ready:
Crossplane IaC

All resources are managed from Kubernetes through CRDs, and Crossplane handles communication with external resources like AWS, Azure or Google Cloud.

Crossplane architecture

There is a community Marketplace with a list of providers, some of which are provided by Upbound (the company behind Crossplane).

Crossplane also allows the creation of custom Kubernetes APIs: platform teams can compose external resources and simplify or customize the APIs presented to platform consumers.
You can thus build your own platform API: for example, composing a GKE cluster in platform-engineering mode.
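As a sketch, a cloud resource declared through a Crossplane provider CRD might look like this (the API group and names assume the Upbound AWS provider; details vary by provider version):

```yaml
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: example-bucket   # illustrative name
spec:
  forProvider:
    region: eu-west-3
  providerConfigRef:
    name: default        # credentials are configured separately in a ProviderConfig
```

Once applied, the Crossplane controller reconciles the external bucket just like any other Kubernetes object: drift is detected and corrected continuously.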

Watch video of this session

The IaC Evolution – on Open Source & Everything Else – Sharone Zitzman, RTFM Please; Mandi Walls, PagerDuty; Roni Frantchi, env0; Solomon Hykes, Dagger.io; Eran Bibi, Firefly

This session was not a presentation but a discussion with a panel of people who shared their opinions and vision on the evolution of Infra-as-Code.
Several interesting points:

  • What are the trends: fragmentation or consolidation of the players? The Terraform debate and OpenTofu came up again!
  • Ultimately it seems that, for reasons of simplicity, the largest ecosystem will probably be the winner
  • State management remains difficult
  • We quickly end up in a "teach yet another tool" world: for what added value?
  • Cloud abstraction is necessary to bring less complexity
  • The Multi-language approach is becoming more and more necessary to let organizations choose according to their knowledge and needs.
  • Need for an approach without vendor lock-in

But in the end, migrating between IaC tools is still a significant effort.

Watch the video of this session

Platform Engineering: feedback

At KubeCon EU 2022 the trend was Platform Engineering; this year, platform engineering is no longer a trend but a reality.
Infrastructure tools and frameworks such as Kubernetes, service meshes, gateways and CI/CD have matured and are widely considered "classic technologies" (at least for this specialized audience). The big challenge now is putting the pieces of the puzzle together to deliver value to internal customers.
See our article on what Platform Engineering is and how it relates to DevOps.

The sponsor showcase was full of mentions about platforms, platform engineering and developer experience, including:
Humanitec, Backstage, Port, Cortex (https://www.cortex.io/), Kratix, Massdriver, Mia-Platform (https://mia-platform.eu/), Qovery, …

We are even seeing workload specification languages appear like Score and Radius to bring a layer of abstraction between the developer’s request and the technical implementation by the Platform Engineer.

The State of Backstage in 2024 – Ben Lambert & Patrik Oldsberg, Spotify

Backstage, the essential building block of many IDPs (Internal Developer Platforms), is the project to which CNCF end users have contributed the most!
After 4 years of existence it has more than 2.4k adopters and 1.4k contributors!!!

The project continues to work on its maturity on governance by proposing an improvement process: Backstage Enhancement Proposals which is largely inspired by the Kubernetes Enhancement Proposals (KEPs)

Here are the new areas of this project:

  • plugins in Backstage:
  • OpenAPI
    • Typed routers for schema-first applications
    • Typed clients for interaction with OpenAPI specifications
  • Scaffolders
    • Scaffolder SIG meets bi-weekly
    • Retries & Rollbacks in case of problems: work in progress
    • Test helpers for scaffolder actions
    • Improved support for secrets: field extension “ui:field: Secret”
  • Microsite and documentation
    • CNCF documentation audit: in progress
  • Notifications and signals
    • Notifications: Rich in-app notification and support for external notifications
    • Signals: Abstraction for pushing events from backend to client
  • Core Framework
    • Authentication improvements: secure by default, simplified service-to-service authentication
    • Evolution in progress
  • New Frontend System
    • In progress to bring about simplification
    • Second phase later in 2024: compatibility, plugins with transparent support for both systems
  • New Backend system
    • Ready for production!
    • The majority of plugins and modules have been migrated
    • Now default
  • Dynamic Features:
    • installation of plugins without having to redo a build
    • Depends on the new Backend and Frontend System
    • Work in progress

Backstage's evolution continues with a focus on simplification that will, without a doubt, be much appreciated!

See the video of this session

Boosting Developer Platform Teams with Product Thinking – Samantha Coffman, Spotify

This session focused on the product approach to building a platform, describing what a "Platform Product" is at Spotify.
Above all, it is a change of mindset: a "Platform as a Product" mindset changes how you build an IDP and manage internal products.

Having a portal is not enough, product orientation is about directing the desired outcomes and creating continuous, iterative value to achieve those outcomes:

IDP solution

This approach also makes it possible to optimize operations, reduce costs and align with market trends.
The goal is to focus on results by analyzing value (what problem we are trying to solve) and viability (who the customers are and why they use the platform).
And get out of the "cool, let's adopt a fashionable new technology" or "there's a new need, let's build a new solution" mode, etc.
It is necessary to understand the business need and develop “Customer Empathy” by talking regularly with your customer and not just a member of the team.

But also to understand the motivations of the different users and managers, up to the CTO: this is the idea of "deployment empathy", giving the global vision needed to deliver a solution that also suits decision-makers.

Platform Engineering

See the video of this session

Kubernetes: obviously!

Obviously, Kubernetes, the heart of KubeCon, was strongly present!
After more than 10 years of existence, it continues to evolve, grow richer, and adapt to the needs and developments of the market.

10 Years of Kubernetes Patterns Evolution – Bilgin Ibryam, Diagrid & Roland Huss, Red Hat

This session explained the main K8s patterns. Understanding these patterns is crucial to understanding the Kubernetes mindset, and strengthens our ability to design Cloud Native applications that fit this approach best.
Patterns by category:

  • Foundational Patterns
    • Health Probe: Answers the question “how do I communicate the health status of an application to Kubernetes?” Via Startup, Readiness and Liveness health checks
    • Structural Patterns: “How to improve the functionality of an application without modifying it?” Via SideCar containers (Separation Of Concerns)
    • Behavioral Pattern: "How to ensure that only one application instance is active?" Via a StatefulSet with only 1 replica (Singleton concept), or alternatively "In-Application Locking":
      In-Application Locking Pattern
    • Configuration Pattern: “How to configure your application with immutable container images?” Via Volumes
    • Security Patterns: “How to protect the platform against deployed code?” Via isolation and SecurityContext (RunAsNonRoot, allowPrivilegeEscalation, readOnlyRootFilesystem, capabilities, …)
  • Advanced Patterns
    • Controller Pattern: “How to go from the current state to a declared target state?” Via State Reconciliation: Observe – Analyze – Act
    • Operator Pattern: “How to encapsulate operational knowledge in executable software?” : Via the operators (CRD + Controller)
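As an illustration of the Health Probe pattern above, a minimal sketch (image name and endpoints are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0  # illustrative
      startupProbe:                  # gives slow-starting apps time before liveness kicks in
        httpGet: { path: /healthz, port: 8080 }
        failureThreshold: 30
        periodSeconds: 5
      readinessProbe:                # controls whether traffic is routed to the pod
        httpGet: { path: /ready, port: 8080 }
      livenessProbe:                 # restarts the container when it fails
        httpGet: { path: /healthz, port: 8080 }
```

The three probes answer different questions: "has it started?", "can it receive traffic?", "is it still alive?".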

There are many other patterns, see the book “Kubernetes Patterns”

Watch video of this session

Planning for Maturity: SIG Release’s Revamp for a More Stable Kubernetes – Adolfo García Veytia, Stacklok; Kat Cosgrove, Dell Technologies; Carlos Panato, Chain Guard; Joseph Sandoval, Adobe

I find it extremely interesting to watch the Kubernetes Release process by the Kubernetes SIG Release Group.
Kubernetes is a complex product composed of multiple components, with multiple teams from different companies distributed around the world, all on a demanding release cycle. And it works!!!

Due to this complexity, it is always complicated for companies to take part in the Release Team, and expectations are not always met!
Kubernetes - SIG Release

Documentation is a crucial point and a prerequisite for improvements that are part of Kubernetes releases.
Having a “Docs Freeze” phase is important as are “Code Freeze” phases to bring about stabilization phases.

Kubernetes v1.30 "Uwubernetes" is in progress:
Kubernetes - v1.30 Release

The 2024 roadmap for SIG Release is to have a more robust, faster and more flexible Release Pipeline.

  • Robust: The metadata creation process must be consistent between consecutive executions and resilient to infrastructure failures.
  • Fast: The time to create Kubernetes releases should be minimized.
  • Flexible: Future improvements to the process will be considered from the start, for example when thinking about expanding publishing metadata.

The k8s package build is done with the Open Build Service from SUSE, which supports all major Linux distributions. The packages are then published on pkgs.k8s.io.
Release actions are done with GitHub Actions, with steps for creating provenance, SBOMs, checking dependencies, and publishing the release and release notes.
The entire release process is described in the kubernetes repository

The future is to create a truly secure supply chain: SBOMs, provenance and signatures are already part of the process, but few people use or rely on them.
There will be a new Kubernetes security flow with a Security Response Committee (SRC).

There is clearly a lot to learn here and apply to our own releases, even on non-open-source projects run in "InnerSource" mode.

Watch the video of this session

Network Policy: The Future of Network Policy with AdminNetworkPolicy – Surya Seetharaman & Andrew Stoycos, Red Hat; Hunter Gregory, Microsoft

NetworkPolicies are an important Kubernetes feature, managed by SIG Network, that helps secure communication flows between pods.
They were designed for developers, with an implicit API design (implicit deny, explicit allow-list of rules).


Two new objects have appeared: AdminNetworkPolicy and BaselineAdminNetworkPolicy, in v1alpha, designed this time for admins, with an explicit model.
The use cases:

  • As a cluster administrator, I want to apply non-editable administrative network policies that must be respected by all users of the cluster.
  • As a cluster administrator, I want to apply the tenant isolation model and delegate the responsibility of explicitly allowing traffic from other tenants to the tenant owner using NetworkPolicies.

The AdminNetworkPolicy API takes precedence over the NetworkPolicy API.
The BaselineAdminNetworkPolicy API corresponds to a default cluster security posture in the absence of NetworkPolicies
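As a sketch, an AdminNetworkPolicy enforcing tenant isolation could look roughly like this (labels are illustrative, and the alpha API shape may still change between revisions):

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: deny-cross-tenant
spec:
  priority: 10           # lower number = higher precedence among ANPs
  subject:
    namespaces:
      matchLabels:
        tenant: a        # applies to all pods in tenant "a" namespaces
  ingress:
    - action: Deny       # cannot be overridden by namespace-level NetworkPolicies
      from:
        - namespaces:
            matchLabels:
              tenant: b  # block traffic coming from tenant "b"
```

Unlike a NetworkPolicy, this object is cluster-scoped and expresses an explicit Deny that tenants cannot undo.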

Several improvement proposals: Network Policy Enhancement Proposals (NPEP):

  • Cluster egress CIDR traffic controls
  • Egress CIDRGroup peer
  • Cluster egress FQDN traffic controls
  • Cluster ingress traffic controls
  • Better tenancy expressions
  • Conformance profiles

Future Plans: Network Policy V2?

  • DeveloperNetworkPolicy:
    • Reinvent developer-centric networking from the ground up.
    • AdminNetworkPolicy + DeveloperNetworkPolicy = “NetworkPolicy v2”
    • For now, just an idea.
  • Policy Assistant
    • A CLI that summarizes policy interactions.
    • An analysis engine that simulates the behavior of hypothetical policies.
    • Extends the Cyclonus project.
    • Excellent for pre-validation, experimentation and troubleshooting!

NetworkPolicy Assistant

See the video of this session

Comparing Sidecar-Less Service Mesh from Cilium and Istio – Christian Posta, Solo.io

Service meshes become essential when building micro-services in Kubernetes; the advantages are numerous:
Service Mesh

Service meshes have largely adopted the sidecar-container approach to bring networking as close as possible to the application without being intrusive, but sidecars also come with drawbacks (sizing difficulties, security concerns, challenging upgrades, etc.).

Using eBPF for the service mesh resolves some of these problems, though it does not eliminate the need for a proxy:

Service Mesh Proxy

The comparison between Cilium and Istio in “sidecarless” mode or in mixed mode (separating layers 4 and 7) clearly explains the different architectures and how these two service meshes operate.

Watch the video of this session

We Tested and Compared 6 Database Operators. The Results are In! – Jérôme Petazzoni, Tiny Shell Script LLC & Alexandre Buisine, Enix

The question of databases in Kubernetes comes up again and again: is it production-ready if we use operators?
This comparison evaluated several database operators (PostgreSQL, MySQL, MariaDB) on setup, observability, backup/restore, upgrades and other features.


Interesting feedback based on their real production experience, rated with banana (good), skull (bad) or a mix of both (improved over previous versions) icons.

Watch the video of this session

Cloud Native Storage: The CNCF Storage TAG Projects, Technology & Landscape – Raffaele Spazzoli, Red Hat; Alex Chircop, Akamai

Going further than the previous session on database operators, this presentation explains advances in Kubernetes storage: bringing more stateful workloads to Kubernetes with automation, scalability, performance and failover!

There is a white paper that clarifies terminology, explains how these elements are currently used in production in public or private cloud environments, and compares the different technology areas.
Storage Attributes

There is also a focus on databases, with a white paper, “Data on Kubernetes Whitepaper – Database Patterns”, which explains the operating modes well:
Storage Databases

It is interesting to see that OperatorHub contains 349 operators, including 47 database operators, 9 of which are PostgreSQL operators.

An essential point when managing data in Kubernetes is disaster recovery.
There are several approaches, described in this document.
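A common building block for these approaches is the CSI snapshot API, which captures a point-in-time copy of a volume. For example, snapshotting a database PVC might look like this (the class and PVC names are illustrative, and a snapshot alone is of course not a complete DR strategy):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-data-snapshot
  namespace: prod
spec:
  volumeSnapshotClassName: csi-snapclass   # illustrative snapshot class
  source:
    persistentVolumeClaimName: db-data     # PVC holding the database files
```

The resulting snapshot can later be referenced as the `dataSource` of a new PersistentVolumeClaim to restore the data, typically combined with application-level backups for full consistency.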

The next focus areas will be data and AI/ML workloads, as well as a performance white paper.

Watch the video of this session

Security: remains a major subject!

Security was everywhere, on all themes with zero-trust approaches.
In particular, secure supply chains were widely discussed in the context of container build tools (alongside SBOMs and SLSA): see the Supply Chain chapter above.
Network security was also featured, with interesting mentions of Cilium (and eBPF in this context), Linkerd and Istio.

Choose Your Own Adventure: The Struggle for Security – Whitney Lee, VMware & Viktor Farcic, Upbound

I'll remember this presentation, in which the audience got to choose the security tools used to protect an application in Kubernetes:

Security tools

Beyond the successive demonstrations of the tools, it was fascinating to see which tools the audience chose:

Watch the video of this session


KubeCon 2024 marked a turning point in the evolution of Kubernetes and the Cloud Native ecosystem, with a particular focus on the integration of artificial intelligence, while highlighting the importance of security, Supply Chain, sustainability, infrastructure-as-code and WebAssembly.
These themes show maturity and a long-term vision, paving the way for future innovations.
The broad spectrum of topics covered, from secure supply chain to best practices in observability and operations, confirms the community’s commitment to continuous improvement. Kubernetes, at the heart of these discussions, proves once again that it remains at the forefront of technology, ready to meet the future challenges of Cloud Native.
