Resource Management for Pods and Containers

When you specify a Pod, you can optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM); there are others.

When you specify the resource request for containers in a Pod, the kube-scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. The kubelet also reserves at least the request amount of that system resource specifically for that container to use.

Requests and limits

If the node where a Pod is running has enough of a resource available, it's possible (and allowed) for a container to use more resource than its request for that resource specifies. However, a container is not allowed to use more than its resource limit.

For example, if you set a memory request of 256 MiB for a container, and that container is in a Pod scheduled to a Node with 8GiB of memory and no other Pods, then the container can try to use more RAM.

If you set a memory limit of 4GiB for that container, the kubelet (and container runtime) enforce the limit. The runtime prevents the container from using more than the configured resource limit. For example: when a process in the container tries to consume more than the allowed amount of memory, the system kernel terminates the process that attempted the allocation, with an out of memory (OOM) error.

Limits can be implemented either reactively (the system intervenes once it sees a violation) or by enforcement (the system prevents the container from ever exceeding the limit). Different runtimes can have different ways to implement the same restrictions.

Note:

If you specify a limit for a resource, but do not specify any request, and no admission-time mechanism has applied a default request for that resource, then Kubernetes copies the limit you specified and uses it as the requested value for the resource.
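
As a minimal sketch (the resource value here is illustrative), a container spec that sets only a memory limit behaves as follows, assuming no LimitRange or other admission mechanism defaults the request:

resources:
  limits:
    memory: "256Mi"
  # With no explicit request and no admission-time defaulting,
  # the effective request becomes:
  # requests:
  #   memory: "256Mi"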

Resource types

CPU and memory are each a resource type. A resource type has a base unit. CPU represents compute processing and is specified in units of Kubernetes CPUs. Memory is specified in units of bytes. For Linux workloads, you can specify huge page resources. Huge pages are a Linux-specific feature where the node kernel allocates blocks of memory that are much larger than the default page size.

For example, on a system where the default page size is 4KiB, you could specify a limit, hugepages-2Mi: 80Mi. If the container tries allocating over 40 2MiB huge pages (a total of 80 MiB), that allocation fails.

Note:

You cannot overcommit hugepages-* resources. This is different from the memory and cpu resources.
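
As an illustrative sketch, that huge page limit could appear in a container spec like this (the memory values alongside it are placeholders; because hugepages-* resources cannot be overcommitted, the huge page request and limit must be equal):

resources:
  requests:
    hugepages-2Mi: "80Mi"   # must match the limit
    memory: "256Mi"
  limits:
    hugepages-2Mi: "80Mi"   # at most forty 2MiB huge pages
    memory: "256Mi"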

CPU and memory are collectively referred to as compute resources, or resources. Compute resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from API resources. API resources, such as Pods and Services, are objects that can be read and modified through the Kubernetes API server.

Resource requests and limits of Pod and container

For each container, you can specify resource limits and requests, including the following:

  • spec.containers[].resources.limits.cpu
  • spec.containers[].resources.limits.memory
  • spec.containers[].resources.limits.hugepages-<size>
  • spec.containers[].resources.requests.cpu
  • spec.containers[].resources.requests.memory
  • spec.containers[].resources.requests.hugepages-<size>

Although you can only specify requests and limits for individual containers, it is also useful to think about the overall resource requests and limits for a Pod. For a particular resource, a Pod resource request/limit is the sum of the resource requests/limits of that type for each container in the Pod.

Resource units in Kubernetes

CPU resource units

Limits and requests for CPU resources are measured in cpu units. In Kubernetes, 1 CPU unit is equivalent to 1 physical CPU core, or 1 virtual core, depending on whether the node is a physical host or a virtual machine running inside a physical machine.

Fractional requests are allowed. When you define a container with spec.containers[].resources.requests.cpu set to 0.5, you are requesting half as much CPU time compared to if you asked for 1.0 CPU. For CPU resource units, the quantity expression 0.1 is equivalent to the expression 100m, which can be read as "one hundred millicpu". Some people say "one hundred millicores", and this is understood to mean the same thing.

CPU resource is always specified as an absolute amount of resource, never as a relative amount. For example, 500m CPU represents roughly the same amount of computing power whether that container runs on a single-core, dual-core, or 48-core machine.
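
As a quick sketch, the following request values are equivalent ways of asking for half a CPU; the millicpu form is usually easier to read:

resources:
  requests:
    cpu: "500m"   # half a CPU, in millicpu form
    # cpu: "0.5"  # the same amount written as a decimal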

Note:

Kubernetes doesn't allow you to specify CPU resources with a precision finer than 1m or 0.001 CPU. To avoid accidentally using an invalid CPU quantity, it's useful to specify CPU units using the milliCPU form instead of the decimal form when using less than 1 CPU unit.

For example, you have a Pod that uses 5m or 0.005 CPU and would like to decrease its CPU resources. By using the decimal form, it's harder to spot that 0.0005 CPU is an invalid value, while by using the milliCPU form, it's easier to spot that 0.5m is an invalid value.

Memory resource units

Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point number using one of these quantity suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:

128974848, 129e6, 129M, 128974848000m, 123Mi

Pay attention to the case of the suffixes. If you request 400m of memory, this is a request for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (400Mi) or 400 megabytes (400M).
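
To make the pitfall concrete, here is an illustrative container snippet; only the first value is what you would normally want:

resources:
  requests:
    memory: "400Mi"   # 400 mebibytes (power-of-two suffix)
    # memory: "400M"  # 400 megabytes (decimal suffix)
    # memory: "400m"  # 0.4 bytes - almost certainly a typo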

Container resources example

The following Pod has two containers. Both containers are defined with a request for 0.25 CPU and 64MiB (2^26 bytes) of memory. Each container has a limit of 0.5 CPU and 128MiB of memory. You can say the Pod has a request of 0.5 CPU and 128MiB of memory, and a limit of 1 CPU and 256MiB of memory.

---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

How Pods with resource requests are scheduled

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node. Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.

How Kubernetes applies resource requests and limits

When the kubelet starts a container as part of a Pod, the kubelet passes that container's requests and limits for memory and CPU to the container runtime.

On Linux, the container runtime typically configures kernel cgroups that apply and enforce the limits you defined.

  • The CPU limit defines a hard ceiling on how much CPU time the container can use. During each scheduling interval (time slice), the Linux kernel checks to see if this limit is exceeded; if so, the kernel waits before allowing that cgroup to resume execution.
  • The CPU request typically defines a weighting. If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests.
  • The memory request is mainly used during (Kubernetes) Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set memory.min and memory.low.
  • The memory limit defines a memory limit for that cgroup. If the container tries to allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates and, typically, intervenes by stopping one of the processes in the container that tried to allocate memory. If that process is the container's PID 1, and the container is marked as restartable, Kubernetes restarts the container. (See the sketch after this list for how these values typically map to cgroup settings.)
  • The memory limit for the Pod or container can also apply to pages in memory backed volumes, such as an emptyDir. The kubelet tracks tmpfs emptyDir volumes as container memory use, rather than as local ephemeral storage.
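
As an illustrative sketch (the exact cgroup interface depends on the runtime and on whether the node uses cgroups v1 or v2), a container with the resources below would typically end up with cgroup v2 settings roughly like the ones noted in the comments:

resources:
  requests:
    cpu: "250m"      # influences the cgroup CPU weight (relative share under contention)
    memory: "64Mi"   # used for scheduling; may hint memory.min / memory.low
  limits:
    cpu: "500m"      # roughly cpu.max "50000 100000" (50ms of CPU per 100ms period)
    memory: "128Mi"  # roughly memory.max 134217728 bytes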

If a container exceeds its memory request and the node that it runs on becomes short of memory overall, it is likely that the Pod the container belongs to will be evicted.

A container might or might not be allowed to exceed its CPU limit for extended periods of time. However, container runtimes don't terminate Pods or containers for excessive CPU usage.

To determine whether a container cannot be scheduled or is being killed due to resource limits, see the Troubleshooting section.

Monitoring compute & memory resource usage

The kubelet reports the resource usage of a Pod as part of the Pod status.

If optional tools for monitoring are available in your cluster, then Pod resource usage can be retrieved either from the Metrics API directly or from your monitoring tools.

Local ephemeral storage

FEATURE STATE: Kubernetes v1.25 [stable]

Nodes have local ephemeral storage, backed by locally-attached writeable devices or, sometimes, by RAM. "Ephemeral" means that there is no long-term guarantee about durability.

Pods use ephemeral local storage for scratch space, caching, and for logs. The kubelet can provide scratch space to Pods using local ephemeral storage to mount emptyDir volumes into containers.

The kubelet also uses this kind of storage to hold node-level container logs, container images, and the writable layers of running containers.

Caution:

If a node fails, the data in its ephemeral storage can be lost. Your applications cannot expect any performance SLAs (disk IOPS for example) from local ephemeral storage.

Note:

To make the resource quota work on ephemeral-storage, two things need to be done:

  • An admin sets the resource quota for ephemeral-storage in a namespace.
  • A user needs to specify limits for the ephemeral-storage resource in the Pod spec.

If the user doesn't specify the ephemeral-storage resource limit in the Pod spec, the resource quota is not enforced on ephemeral-storage.
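
As an illustrative sketch (the quota values and namespace name are placeholders), an administrator could set an ephemeral-storage quota for a namespace like this:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota   # hypothetical name
  namespace: example-team         # hypothetical namespace
spec:
  hard:
    requests.ephemeral-storage: 10Gi
    limits.ephemeral-storage: 20Gi

Pods created in that namespace would then need to declare ephemeral-storage requests and limits (as in the example later in this section) for the quota to take effect.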

Kubernetes lets you track, reserve and limit the amount of ephemeral local storage a Pod can consume.

Configurations for local ephemeral storage

Kubernetes supports two ways to configure local ephemeral storage on a node:

  • Single filesystem
  • Two filesystems

Single filesystem

In this configuration, you place all different kinds of ephemeral local data (emptyDir volumes, writeable layers, container images, logs) into one filesystem. The most effective way to configure the kubelet means dedicating this filesystem to Kubernetes (kubelet) data.

The kubelet also writes node-level container logs and treats these similarly to ephemeral local storage.

The kubelet writes logs to files inside its configured log directory (/var/log by default), and has a base directory for other locally stored data (/var/lib/kubelet by default).

Typically, both /var/lib/kubelet and /var/log are on the system root filesystem, and the kubelet is designed with that layout in mind.

Your node can have as many other filesystems, not used for Kubernetes, as you like.

Two filesystems

You have a filesystem on the node that you're using for ephemeral data that comes from running Pods: logs, and emptyDir volumes. You can use this filesystem for other data (for example: system logs not related to Kubernetes); it can even be the root filesystem.

The kubelet also writes node-level container logs into the first filesystem, and treats these similarly to ephemeral local storage.

You also use a separate filesystem, backed by a different logical storage device. In this configuration, the directory where you tell the kubelet to place container image layers and writeable layers is on this second filesystem.

The first filesystem does not hold any image layers or writeable layers.

Your node can have as many other filesystems, not used for Kubernetes, as you like.

The kubelet can measure how much local storage it is using. It does this provided that you have set up the node using one of the supported configurations for local ephemeral storage.

If you have a different configuration, then the kubelet does not apply resource limits for ephemeral local storage.

Note:

The kubelet tracks tmpfs emptyDir volumes as container memory use, rather than as local ephemeral storage.

Note:

The kubelet will only track the root filesystem for ephemeral storage. OS layouts that mount a separate disk to /var/lib/kubelet or /var/lib/containers will not report ephemeral storage correctly.

Setting requests and limits for local ephemeral storage

You can specify ephemeral-storage for managing local ephemeral storage. Each container of a Pod can specify either or both of the following:

  • spec.containers[].resources.limits.ephemeral-storage
  • spec.containers[].resources.requests.ephemeral-storage

Limits and requests for ephemeral-storage are measured in byte quantities. You can express storage as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following quantities all represent roughly the same value:

  • 128974848
  • 129e6
  • 129M
  • 123Mi

Pay attention to the case of the suffixes. If you request 400m of ephemeral-storage, this is a request for 0.4 bytes. Someone who types that probably meant to ask for 400 mebibytes (400Mi) or 400 megabytes (400M).

In the following example, the Pod has two containers. Each container has a request of 2GiB of local ephemeral storage. Each container has a limit of 4GiB of local ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of local ephemeral storage. 500Mi of that limit could be consumed by the emptyDir volume.

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
    volumeMounts:
    - name: ephemeral
      mountPath: "/tmp"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
    volumeMounts:
    - name: ephemeral
      mountPath: "/tmp"
  volumes:
  - name: ephemeral
    emptyDir:
      sizeLimit: 500Mi

How Pods with ephemeral-storage requests are scheduled

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see Node Allocatable.

The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node.

Ephemeral storage consumption management

If the kubelet is managing local ephemeral storage as a resource, then the kubelet measures storage use in:

  • emptyDir volumes, except tmpfs emptyDir volumes
  • directories holding node-level logs
  • writeable container layers

If a Pod is using more ephemeral storage than you allow it to, the kubelet sets an eviction signal that triggers Pod eviction.

For container-level isolation, if a container's writable layer and log usage exceeds its storage limit, the kubelet marks the Pod for eviction.

For pod-level isolation, the kubelet works out an overall Pod storage limit by summing the limits for the containers in that Pod. In this case, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the overall Pod storage limit, then the kubelet also marks the Pod for eviction.

Caution:

If the kubelet is not measuring local ephemeral storage, then a Pod that exceeds its local storage limit will not be evicted for breaching local storage resource limits.

However, if the filesystem space for writeable container layers, node-level logs, or emptyDir volumes falls low, the node taints itself as short on local storage and this taint triggers eviction for any Pods that don't specifically tolerate the taint.

See the supported configurations for ephemeral local storage.

The kubelet supports different ways to measure Pod storage use:

  • Periodic scanning
  • Filesystem project quota

The kubelet performs regular, scheduled checks that scan each emptyDir volume, container log directory, and writeable container layer.

The scan measures how much space is used.

Note:

In this mode, the kubelet does not track open file descriptors for deleted files.

If you (or a container) create a file inside an emptyDir volume, something then opens that file, and you delete the file while it is still open, then the inode for the deleted file stays until you close that file, but the kubelet does not categorize the space as in use.

FEATURE STATE: Kubernetes v1.15 [alpha]

Project quotas are an operating-system level feature for managing storage use on filesystems. With Kubernetes, you can enable project quotas for monitoring storage use. Make sure that the filesystem backing the emptyDir volumes, on the node, provides project quota support. For example, XFS and ext4fs offer project quotas.

Note:

Project quotas let you monitor storage use; they do not enforce limits.

Kubernetes uses project IDs starting from 1048576. The IDs in use are registered in /etc/projects and /etc/projid. If project IDs in this range are used for other purposes on the system, those project IDs must be registered in /etc/projects and /etc/projid so that Kubernetes does not use them.

Quotas are faster and more accurate than directory scanning. When a directory is assigned to a project, all files created under that directory are created in that project, and the kernel merely has to keep track of how many blocks are in use by files in that project. If a file is created and deleted, but has an open file descriptor, it continues to consume space. Quota tracking records that space accurately, whereas directory scans overlook the storage used by deleted files.

If you want to use project quotas, you should:

  • Enable the LocalStorageCapacityIsolationFSQuotaMonitoring=true feature gate using the featureGates field in the kubelet configuration or the --feature-gates command line flag (see the sketch after this list).

  • Ensure that the root filesystem (or optional runtime filesystem) has project quotas enabled. All XFS filesystems support project quotas. For ext4 filesystems, you need to enable the project quota tracking feature while the filesystem is not mounted.

    # For ext4, with /dev/block-device not mounted
    sudo tune2fs -O project -Q prjquota /dev/block-device
  • Ensure that the root filesystem (or optional runtime filesystem) is mounted with project quotas enabled. For both XFS and ext4fs, the mount option is named prjquota.
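
As a minimal sketch of the first step, assuming you manage the kubelet through its configuration file, the feature gate can be enabled like this (only the relevant fields are shown):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  LocalStorageCapacityIsolationFSQuotaMonitoring: true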

Extended resources

Extended resources are fully-qualified resource names outside the kubernetes.io domain. They allow cluster operators to advertise and users to consume the non-Kubernetes-built-in resources.

There are two steps required to use Extended Resources. First, the cluster operator must advertise an Extended Resource. Second, users must request the Extended Resource in Pods.

Managing extended resources

Node-level extended resources

Node-level extended resources are tied to nodes.

Device plugin managed resources

See Device Plugin for how to advertise device plugin managed resources on each node.

Other resources

To advertise a new node-level extended resource, the cluster operator can submit a PATCH HTTP request to the API server to specify the available quantity in the status.capacity for a node in the cluster. After this operation, the node's status.capacity will include a new resource. The status.allocatable field is updated automatically with the new resource asynchronously by the kubelet.

Because the scheduler uses the node's status.allocatable value when evaluating Pod fitness, the scheduler only takes account of the new value after that asynchronous update. There may be a short delay between patching the node capacity with a new resource and the time when the first Pod that requests the resource can be scheduled on that node.

Example:

Here is an example showing how to use curl to form an HTTP request that advertises five "example.com/foo" resources on node k8s-node-1 whose master is k8s-master.

curl --header "Content-Type: application/json-patch+json" \--request PATCH \--data '[{"op": "add", "path": "/status/capacity/example.com~1foo", "value": "5"}]' \http://k8s-master:8080/api/v1/nodes/k8s-node-1/status

Note:

In the preceding request, ~1 is the encoding for the character / in the patch path. The operation path value in JSON-Patch is interpreted as a JSON-Pointer. For more details, see IETF RFC 6901, section 3.

Cluster-level extended resources

Cluster-level extended resources are not tied to nodes. They are usually managed by scheduler extenders, which handle the resource consumption and resource quota.

You can specify the extended resources that are handled by scheduler extenders in the scheduler configuration.

Example:

The following configuration for a scheduler policy indicates that the cluster-level extended resource "example.com/foo" is handled by the scheduler extender.

  • The scheduler sends a Pod to the scheduler extender only if the Pod requests "example.com/foo".
  • The ignoredByScheduler field specifies that the scheduler does not check the "example.com/foo" resource in its PodFitsResources predicate.
{ "kind": "Policy", "apiVersion": "v1", "extenders": [ { "urlPrefix":"<extender-endpoint>", "bindVerb": "bind", "managedResources": [ { "name": "example.com/foo", "ignoredByScheduler": true } ] } ]}

Consuming extended resources

Users can consume extended resources in Pod specs like CPU and memory. The scheduler takes care of the resource accounting so that no more than the available amount is simultaneously allocated to Pods.

The API server restricts quantities of extended resources to whole numbers. Examples of valid quantities are 3, 3000m and 3Ki. Examples of invalid quantities are 0.5 and 1500m (because 1500m would result in 1.5).

Note:

Extended resources replace Opaque Integer Resources. Users can use any domain name prefix other than kubernetes.io, which is reserved.

To consume an extended resource in a Pod, include the resource name as a key in the spec.containers[].resources.limits map in the container spec.

Note:

Extended resources cannot be overcommitted, so request and limit must be equal if both are present in a container spec.

A Pod is scheduled only if all of the resource requests are satisfied, including CPU, memory and any extended resources. The Pod remains in the PENDING state as long as the resource request cannot be satisfied.

Example:

The Pod below requests 2 CPUs and 1 "example.com/foo" (an extended resource).

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: myimage
    resources:
      requests:
        cpu: 2
        example.com/foo: 1
      limits:
        example.com/foo: 1

PID limiting

Process ID (PID) limits allow for the configuration of a kubelet to limit the number of PIDs that a given Pod can consume. See PID Limiting for information.

Troubleshooting

My Pods are pending with event message FailedScheduling

If the scheduler cannot find any node where a Pod can fit, the Pod remains unscheduled until a place can be found. An Event is produced each time the scheduler fails to find a place for the Pod. You can use kubectl to view the events for a Pod; for example:

kubectl describe pod frontend | grep -A 9999999999 Events
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  23s   default-scheduler  0/42 nodes available: insufficient cpu

In the preceding example, the Pod named "frontend" fails to be scheduled due to insufficient CPU resource on any node. Similar error messages can also suggest failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod is pending with a message of this type, there are several things to try:

  • Add more nodes to the cluster.
  • Terminate unneeded Pods to make room for pending Pods.
  • Check that the Pod is not larger than all the nodes. For example, if all the nodes have a capacity of cpu: 1, then a Pod with a request of cpu: 1.1 will never be scheduled.
  • Check for node taints. If most of your nodes are tainted, and the new Pod does not tolerate that taint, the scheduler only considers placements onto the remaining nodes that don't have that taint.

You can check node capacities and amounts allocated with the kubectl describe nodes command. For example:

kubectl describe nodes e2e-test-node-pool-4lw4
Name:            e2e-test-node-pool-4lw4
[ ... lines removed for clarity ...]
Capacity:
  cpu:       2
  memory:    7679792Ki
  pods:      110
Allocatable:
  cpu:       1800m
  memory:    7474992Ki
  pods:      110
[ ... lines removed for clarity ...]
Non-terminated Pods:    (5 in total)
  Namespace    Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                  ------------  ----------  ---------------  -------------
  kube-system  fluentd-gcp-v1.38-28bv1               100m (5%)     0 (0%)      200Mi (2%)       200Mi (2%)
  kube-system  kube-dns-3297075139-61lj3             260m (13%)    0 (0%)      100Mi (1%)       170Mi (2%)
  kube-system  kube-proxy-e2e-test-...               100m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system  monitoring-influxdb-grafana-v4-z1m12  200m (10%)    200m (10%)  600Mi (8%)       600Mi (8%)
  kube-system  node-problem-detector-v0.1-fj7m3      20m (1%)      200m (10%)  20Mi (0%)        100Mi (1%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  680m (34%)    400m (20%)  920Mi (11%)      1070Mi (13%)

In the preceding output, you can see that if a Pod requests more than 1.120 CPUs or more than 6.23Gi of memory, that Pod will not fit on the node.

By looking at the “Pods” section, you can see which Pods are taking up space on the node.

The amount of resources available to Pods is less than the node capacity because system daemons use a portion of the available resources. Within the Kubernetes API, each Node has a .status.allocatable field (see NodeStatus for details).

The .status.allocatable field describes the amount of resources that are available to Pods on that node (for example: 15 virtual CPUs and 7538 MiB of memory). For more information on node allocatable resources in Kubernetes, see Reserve Compute Resources for System Daemons.

You can configure resource quotas to limit the total amount of resources that a namespace can consume. Kubernetes enforces quotas for objects in a particular namespace when there is a ResourceQuota in that namespace. For example, if you assign specific namespaces to different teams, you can add ResourceQuotas into those namespaces. Setting resource quotas helps to prevent one team from using so much of any resource that this over-use affects other teams.
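
As an illustrative sketch (the namespace and quota values are placeholders), a per-team compute quota could look like this:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota   # hypothetical name
  namespace: team-a     # hypothetical namespace
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi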

You should also consider what access you grant to that namespace: full write access to a namespace allows someone with that access to remove any resource, including a configured ResourceQuota.

My container is terminated

Your container might get terminated because it is resource-starved. To check whether a container is being killed because it is hitting a resource limit, call kubectl describe pod on the Pod of interest:

kubectl describe pod simmemleak-hra99

The output is similar to:

Name:                           simmemleak-hra99
Namespace:                      default
Image(s):                       saadali/simmemleak
Node:                           kubernetes-node-tf0f/10.240.216.66
Labels:                         name=simmemleak
Status:                         Running
Reason:
Message:
IP:                             10.244.2.75
Containers:
  simmemleak:
    Image:  saadali/simmemleak:latest
    Limits:
      cpu:          100m
      memory:       50Mi
    State:          Running
      Started:      Tue, 07 Jul 2019 12:54:41 -0700
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Fri, 07 Jul 2019 12:54:30 -0700
      Finished:     Fri, 07 Jul 2019 12:54:33 -0700
    Ready:          False
    Restart Count:  5
Conditions:
  Type      Status
  Ready     False
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  42s   default-scheduler  Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
  Normal  Pulled     41s   kubelet            Container image "saadali/simmemleak:latest" already present on machine
  Normal  Created    41s   kubelet            Created container simmemleak
  Normal  Started    40s   kubelet            Started container simmemleak
  Normal  Killing    32s   kubelet            Killing container with id ead3fb35-5cf5-44ed-9ae1-488115be66c6: Need to kill Pod

In the preceding example, the Restart Count: 5 indicates that the simmemleak container in the Pod was terminated and restarted five times (so far). The OOMKilled reason shows that the container tried to use more memory than its limit.

Your next step might be to check the application code for a memory leak. If you find that the application is behaving how you expect, consider setting a higher memory limit (and possibly request) for that container.

What's next

  • Get hands-on experience assigning Memory resources to containers and Pods.
  • Get hands-on experience assigning CPU resources to containers and Pods.
  • Read how the API reference defines a container and its resource requirements
  • Read about project quotas in XFS
  • Read more about the kube-scheduler configuration reference (v1)
  • Read more about Quality of Service classes for Pods