```hare
use fmt;
use types;

export fn main() void = {
	let x: [][]int = alloc([[]...], 16);
	for (let i = 0z; i < len(x); i += 1) {
		const bucket = &x[i];
		fmt::println(len(bucket), (&bucket: *types::slice).data)!;
	};
};
```
This causes invalid frees of stack addresses when freeing the empty slices. Bad!
The root cause here is that we try to fill the outer slice with a single `rt::memcpy`, which just copies a single `[]` from `x[0]` to `x[1]`. The solution for most cases is to use `rt::memset` instead, but that's not possible when the memory we want to duplicate is not byte-periodic. For those cases we can either:

- introduce a new rt function that does something like `memset`, but with a pattern of arbitrary length, or
- make repeated calls to `memcpy`, each with double the length of the previous one.

Which one of those sounds better?
That doesn't sound right to me, it should work as-is. The [] value should have a null data value, and then memcpy'ing this to each item in the parent array should just copy the null value. No?
Yeah, that would work, but calling `memcpy` `len(parent array)` times is not ideal, hence why I proposed some possible improvements over that.
I mean, I think we still don't understand the problem. Posing solutions before identifying the problem is putting the cart before the horse.
Everything I said in this thread was based on a flawed understanding of the relevant harec code so it can safely be disregarded.
I think the actual bug here is that you're casting a `**[]int` to a `*types::slice`. `x[i]: []int` (and `(&x[i]: *types::slice).data`) is null, as expected. ~sircmpwn, do you remember what the original context for this was?
The issue is linked here: https://git.sr.ht/~sircmpwn/hare-json/tree/ba8506a2/item/encoding/json/value.ha#L188
(I believe the correct action there would be `free(*bucket);`.)
I agree