App for Cloudflare® Pro 1.9.8

  • Item seller: Shawn

WP CLI R2 Sync getting killed by OOM

chadneu

New member
Code:
2026-01-28T22:46:21.546731+00:00 server kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-0.slice/session-3.scope,task=php,pid=15191,uid=0
2026-01-28T22:46:21.546732+00:00 server kernel: Out of memory: Killed process 15191 (php) total-vm:32369584kB, anon-rss:30888688kB, file-rss:2816kB, shmem-rss:0kB, UID:0 pgtables:60664kB oom_score_adj:0
2026-01-28T22:46:21.590158+00:00 server systemd[1]: session-3.scope: A process of this unit has been killed by the OOM killer.

My server has 32 GB of RAM. Any way around this? Should the WP-CLI process be freeing memory as it goes along?
 
In theory, memory use should be fairly minimal. By default the plugin uploads data as a stream (effectively a byte-by-byte data stream that doesn't need to load the entire file into memory). If the kill happens while very large media files are being uploaded, your server may not support streams; in that case the plugin has to fall back to reading the whole file into memory.
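To make the difference concrete, here's a minimal Python sketch of the two behaviors (the plugin itself is PHP, so this is an illustration of the concept, not its actual code):

```python
CHUNK_SIZE = 64 * 1024  # stream in 64 KiB pieces; peak memory stays ~one chunk


def upload_streamed(path, sink):
    """Copy the file into `sink` chunk by chunk -- the whole file is
    never held in memory at once, no matter how large it is."""
    with open(path, "rb") as src:
        while chunk := src.read(CHUNK_SIZE):
            sink.write(chunk)


def upload_in_memory(path, sink):
    """The fallback path: the entire file is read into memory first,
    which is how a ~30 GB video can exhaust a 32 GB server."""
    with open(path, "rb") as src:
        sink.write(src.read())
```

Both produce the same bytes on the wire; only the peak memory differs, which matches the `anon-rss:30888688kB` in the OOM log above.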

Do you have any particularly large media files (like video) by chance?
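One quick way to hunt for oversized media is to walk the uploads directory and list anything over a threshold. A rough Python sketch (the `wp-content/uploads` path and the 500 MB cutoff are assumptions, adjust to taste):

```python
import os


def find_large_files(root, min_bytes=500 * 1024 * 1024):
    """Yield (path, size_in_bytes) for every regular file under `root`
    that is at least `min_bytes` in size."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path) and os.path.getsize(path) >= min_bytes:
                yield path, os.path.getsize(path)


# Example: scan a WordPress uploads directory for very large media
# for path, size in find_large_files("wp-content/uploads"):
#     print(f"{size / 1024**2:.0f} MB  {path}")
```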
 
With files that size, my gut says the server doesn't support streams, so the plugin falls back to reading the file into memory. A couple of things you can check:
  • Make sure your server's PHP supports fopen()
  • Make sure WordPress is using cURL for its transport (it does by default)
  • The version of cURL that PHP uses should be 7.1.0 or higher (probably not the culprit, since 7.1.0 came out over 25 years ago)
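As an aside, the version check in that list should be done numerically rather than as a string comparison (lexicographically, "10.x" would sort before "7.1.0"). A small Python sketch of the idea:

```python
def version_tuple(version):
    """'8.5.0' -> (8, 5, 0), so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))


def curl_new_enough(installed, required="7.1.0"):
    """True when the installed cURL meets the stated minimum."""
    return version_tuple(installed) >= version_tuple(required)
```

On the server itself, PHP's `curl_version()['version']` reports the cURL version PHP is actually linked against, which can differ from the system `curl` binary.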
If you want to test whether it's specifically the very large videos, you can grab the post ID (when you click on the media in the dashboard, it's the itemid in the URL) and run the CLI command for just that one media item, like so:

Bash:
wp app-for-cf migrate-media --post-id=xxxxx

The *simple* solution, of course, would be to raise the memory limit temporarily just to move everything to R2, but that doesn't solve the problem if you're going to be uploading huge new files down the road.
 
It seems to fail at around 27%, so I'll start going through files to find a few huge ones. I'll report back. Thanks.

cURL is 8.5. fopen is enabled, but allow_url_fopen was set to Off. Would turning that On help? Edit: no, enabling allow_url_fopen didn't help.
 
It's really just fopen itself, not allow_url_fopen. If opening the file as a stream (with fopen) fails for whatever reason, the plugin falls back to reading the file into memory.

I could put together a custom debugging build that outputs some additional info when running via CLI. That might help track down why streaming isn't working on your server.
 