```
$ muon samu -j 0
Segmentation fault
```
ninja supports `-j 0` to mean "start unlimited jobs in parallel", analogous to `make -j`. samurai sets `buildopts.maxjobs = -1` in this case (i.e. `SIZE_MAX`, after conversion to `size_t`).

However, since muon's samurai allocates all `maxjobs` upfront, this causes an attempted allocation of `SIZE_MAX` jobs. Even though `samu_arena_alloc` can't fail, `SIZE_MAX * sizeof(jobs[0])` is larger than `SIZE_MAX`, so `samu_reallocarray` fails and returns `NULL`. Then, when `samu_build()` tries to access the `NULL` array, it crashes.
Here are some possible solutions:

1. Don't support `-j 0`. I'm not convinced that unlimited parallelism is a good feature, so it's probably fine to not support it (change `num < 0` to `num <= 0` in `samu_jobsflag()`).
2. Allocate jobs lazily. Only running jobs need an entry in the `pollfd` array, so you could allocate the job structs separately and make `jobs` a linked list.

In either case, I think the `fatal` should be added back to `samu_xreallocarray`, since callers expect that it always succeeds.
I did not know about `-j 0`. I agree it is probably fine if muon's samu doesn't support this feature. A linked list of jobs might have been a better approach, but I really like the simplicity of allocating everything upfront. I've gone ahead and pushed a commit implementing #1. If anyone needs this feature, I'll reconsider.