Ok I have to give Austral credit for calling this out. Why don't we make the stdlib actually require capabilities? Basically have an object passed to the program entry point that has capabilities for standard streams, file access, network access, etc. Then those get passed to the portions of the program that actually need them. Then when, for instance, you start the program in WASI, it can check which capabilities it has vs which are needed and do something sensible if you don't have the permissions it needs.
I imagine all of these objects in the end being zero-sized types that can be shared and copied freely, for most OS APIs at least. They are simply tokens marking that your code is allowed to call the corresponding OS functionality.
Unlike OS capabilities or CHERI or such, these are intended to be a robustness tool, not a security tool. We're running potentially-unsafe code, there's nothing to actually stop you from conjuring forth a capability object out of nothing besides the fact that the language makes it inconvenient.
Exception: logging/debugging. That is one place where needing to pass an explicit handle to every function that logs would literally cripple you any time you really needed it.
API mockup:
fn main(caps: Caps) =
    let files = caps.filesystem()
    let net = caps.network()
    let time = caps.clock()
    let assets = Assets:new(files, net)
    let gamestate = Gamestate:new()
    gamestate:run(time, assets)
end
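For comparison, here's a rough Rust sketch of the same idea, assuming the tokens really are zero-sized `Copy` types (all type and method names here are hypothetical, mirroring the mockup above):

```rust
// Hypothetical zero-sized capability tokens. Copy, so they can be
// handed around freely; they carry no data, only permission-by-convention.
#[derive(Clone, Copy)]
struct FsCap;
#[derive(Clone, Copy)]
struct NetCap;
#[derive(Clone, Copy)]
struct ClockCap;

// The root object handed to the program entry point.
struct Caps;

impl Caps {
    fn filesystem(&self) -> FsCap { FsCap }
    fn network(&self) -> NetCap { NetCap }
    fn clock(&self) -> ClockCap { ClockCap }
}

// Only code that was handed the right token can touch the corresponding API.
fn read_file(_cap: FsCap, path: &str) -> std::io::Result<String> {
    std::fs::read_to_string(path)
}
```

Since every token is a ZST, threading them through the program costs nothing at runtime; the whole scheme lives in the type system.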
Speculative way to forge a capability, which you shouldn't do but the language can't really disallow right now:
fn evil() NetworkCapability =
    unsafe
        let horror NetworkCapability ^const = ptr:from(0)
        // Not sure if dereferencing a pointer to a ZST would be UB
        // or something, but by the rules for the lang I'm imagining
        // so far it would just be a no-op
        let forgery = horror^
        forgery
    end
end
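For what it's worth, the equivalent forgery is actually well-defined in Rust today: a read of a zero-sized type touches no memory, so any non-null, aligned pointer will do. A minimal sketch (the capability type name is taken from the mockup above):

```rust
// Hypothetical zero-sized capability token.
#[derive(Clone, Copy)]
struct NetworkCapability;

fn evil() -> NetworkCapability {
    // For a true ZST, reading through a dangling-but-aligned non-null
    // pointer reads zero bytes, which Rust defines as valid. No actual
    // memory access happens; we just conjure the token from nothing.
    unsafe {
        std::ptr::NonNull::<NetworkCapability>::dangling().as_ptr().read()
    }
}
```

Which illustrates the point: nothing below the type system stops this, so the tokens are a robustness convention, not a security boundary.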
You could forbid this by using a "no unsafe code" property, but a) that is probably a quite limiting property and b) there's always compiler bugs, API bugs, and other random shit. The evil() function wouldn't work on WASI anyway, because WASI has actual capabilities and the ability to enforce them via sandboxing. Security is the OS's job, not ours.
Regarding logging: I think I recall reading somewhere that in languages with explicit capability tracking, some functions can still explicitly declare that they pull a capability implicitly from the environment? Though I'm not sure how you'd then keep it out of the function signature...
Or maybe some functions (logging/debugging) could be marked as globally exempt from capabilities tracking? (though, this then also reminds me of the log4j incident...)
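The "globally exempt" option basically amounts to logging reaching into ambient global state instead of taking a token. A minimal sketch of what that looks like (everything here is hypothetical, not any real logging API):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Ambient, process-global logging state: no capability token involved.
static LOG_ENABLED: AtomicBool = AtomicBool::new(true);

// No capability parameter, so this is callable from anywhere. That's
// exactly the tradeoff: maximally convenient, but invisible in every
// function signature that transitively uses it.
fn log(msg: &str) {
    if LOG_ENABLED.load(Ordering::Relaxed) {
        eprintln!("[log] {msg}");
    }
}
```

The log4j worry is precisely that such ambient, always-available machinery can end up doing far more than "print a string" without any signature warning you.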
Yeah, a lot of it is tradeoffs between "I want this code to do only exactly what I tell it to" vs "I want this code to be able to do whatever it needs to for its job". It's tough 'cause there's no one-size-fits-all answer. I think the job of the language is to make both cases easy to express, and the job of the stdlib is to make the limited option preferable while leaving a clear path to the more flexible one.
A lispy system for item definitions lets the user redefine `defn`; with that, it's possible to add some extra effects that're implicitly included in every function signature as long as you're using the overridden `defn` macro. I really love this, it makes it practical to track very low-level effects without making it the user's problem. The `Debug` effect, for example, can be part of the signature of every instruction and allow breakpoints to be typesafe. It can be really valuable for writing embedded applications; it isn't necessarily safe to insert breakpoints in interrupt contexts etc.
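A rough Rust approximation of that wrapping trick, using `macro_rules!` in place of a redefined `defn` to thread an implicit `Debug` token into every function it defines (all names here are made up for illustration):

```rust
// Hypothetical zero-sized effect/capability token.
#[derive(Clone, Copy)]
struct Debug;

// Stand-in for the overridden `defn`: every function defined through this
// macro silently grows a leading Debug parameter, so the effect shows up
// in the signature without the user writing it out.
macro_rules! defn {
    (fn $name:ident($($arg:ident : $ty:ty),*) -> $ret:ty $body:block) => {
        fn $name(_dbg: Debug, $($arg: $ty),*) -> $ret $body
    };
}

defn! {
    // Expands to: fn add(_dbg: Debug, a: i32, b: i32) -> i32 { ... }
    fn add(a: i32, b: i32) -> i32 { a + b }
}
```

Callers must hold a `Debug` token to call anything defined through the macro, which is the "typesafe breakpoints" idea: contexts where debugging is unsafe simply never get handed the token.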
Good point, but I much prefer not having random parts of the language randomly boobytrapped. Something like Rust's `#[]` annotations are more to my liking: you can customize them however you want, but they're never secret.