We’re working on a third-party solution for this. Should have some updates that sandbox cargo builds shortly.
https://github.com/phylum-dev/birdcage
It’s a cross-platform sandbox that works on Linux via Landlock and macOS via Seatbelt. We’ve rolled this into our CLI (https://github.com/phylum-dev/cli) so you can do things like:
phylum
For example, for npm, which currently uses the sandbox:
phylum npm install
We’re adding this to cargo to similarly sandbox crate installations. Would love feedback and thoughts on our sandbox!
I don’t get it.
A Rust procedural macro (proc macro) is a metaprogramming feature in Rust that allows you to define custom syntax extensions and code transformations. They operate on the abstract syntax tree (AST) of Rust code during compilation and can generate or modify code based on annotations or custom syntax.
Sandboxing a Rust proc macro refers to restricting the capabilities of the macro to improve security and prevent potentially harmful code execution. There are several reasons why someone might want to sandbox a proc macro:
- Security: Untrusted code can be executed during the macro expansion process. To prevent malicious code execution or code that could access sensitive information, sandboxing techniques are employed.
- Preventing unintended side effects: Some proc macros might inadvertently introduce side effects like file I/O or network requests. Sandboxing can limit these actions to ensure the macro only performs intended transformations.
- Resource control: To manage system resources, a sandboxed proc macro can be configured to run within resource limits, preventing excessive memory or CPU usage.
- Isolation: Sandboxing helps keep the macro’s execution isolated from the rest of the compilation process, reducing the risk of interfering with other parts of the code.
Sandboxing a Rust proc macro typically involves using crates like sandbox or cap-std to restrict the macro’s capabilities and limit its access to the system. This ensures that the macro operates within a controlled environment, enhancing the overall safety of code compilation and execution. -GPT
I didn’t get it either.
Seems to me if your code will be this unpredictable, you should only run it on an air-gapped machine.
It’s just compile time code execution.
The difference between those macros (“procedural macros”) and regular macros is that regular macros are pretty much just templated code that gets expanded, while proc macros contain code that is run at compile time. That makes them more powerful, but also more dangerous from a security perspective, since you would expect just compiling a program to be safe.
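To make that concrete, here’s a minimal sketch (all names are made up for illustration): a macro_rules! macro only rearranges the tokens it is given, while a proc macro is an ordinary Rust function that the compiler itself runs during expansion, so it can do real I/O on the build machine.

    // Declarative macro: pure token templating. Nothing runs at compile
    // time beyond normal compilation of the expanded code.
    macro_rules! square {
        ($x:expr) => {
            $x * $x
        };
    }

    // Procedural macro: lives in its own crate with `proc-macro = true`
    // in Cargo.toml. The function body is ordinary Rust that the compiler
    // executes while expanding the macro, so it can read files, spawn
    // processes, hit the network, etc. (hypothetical macro name)
    use proc_macro::TokenStream;

    #[proc_macro]
    pub fn hostname_at_build_time(_input: TokenStream) -> TokenStream {
        // Compile-time I/O: this runs on the developer's machine the
        // moment the macro is expanded.
        let hostname = std::fs::read_to_string("/etc/hostname").unwrap_or_default();
        // Expand to a string literal containing the result.
        format!("{:?}", hostname.trim()).parse().unwrap()
    }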
Also: is copy pasting ChatGPT answers a thing now even when you, as you said, don’t even know what it means??
As long as it’s annotated as such I don’t mind, even if it’s wrong. And if it’s wrong, you’re more likely to get people to actually respond via an “umm, but actually” type of response.
GPT is fairly useful but I definitely don’t trust it implicitly. Lol
I understood the answer, not the meme. I guess I wasn’t clear. Sorry internet friend. Clearly GPT was lacking some nuance too, as evidenced by some discussion ITT.
I’m pretty sure they operate on tokens, not the AST.
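For what it’s worth, the compiler hands a proc macro raw token streams; turning them into an AST is opt-in (usually via the syn crate). A rough sketch of what a derive macro’s signature actually looks like (the trait and names are made up for illustration):

    // In a crate with `proc-macro = true`; `syn` and `quote` are the usual
    // helpers for turning tokens into an AST and back again.
    use proc_macro::TokenStream;

    #[proc_macro_derive(MyTrait)]
    pub fn derive_my_trait(input: TokenStream) -> TokenStream {
        // `input` is just tokens; building an AST is the macro's own choice.
        let ast: syn::DeriveInput = syn::parse(input).expect("failed to parse input");
        let name = &ast.ident;
        // quote! emits a new TokenStream for the generated impl.
        quote::quote! {
            impl MyTrait for #name {}
        }
        .into()
    }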
But now you do? I don’t understand what the image has to do with any of this.
Did I say that? It’s obviously a fairly nuanced topic as topics go, and GPT is not great at nuance. It doesn’t seem like it’s totally wrong though.
Anyhow, I don’t do Rust, so it’s kinda irrelevant to me, just an interesting topic.
I didn’t know if you said that, that’s why I asked.
What happened to the White Gold Tower?
You would have to ask Roxanne Meadows
What’s that?
This looks kind of like the Imperial City from Oblivion, which has a huge tower in the middle called White Gold Tower.
I think the bigger problem is that they are hard to write and sometimes break tooling.
Why would they need to be?
They download and execute code at compile time. If a dependency of a dependency of a dependency gets hacked and bad code gets inserted into its crate, developers around the world will get infected. It’s like piping curl to bash from a bunch of random sources every time you hit compile.
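To make the threat concrete, here’s a harmless sketch of the access any build script (build.rs) in a transitive dependency already has; proc macros get exactly the same access. This isn’t from any real attack, it’s just standard-library calls that cargo will happily run during a build:

    // build.rs somewhere deep in the dependency tree; cargo compiles and
    // runs this on the developer's machine before building the crate.
    use std::env;
    use std::fs;

    fn main() {
        // Read anything the build user can read...
        let home = env::var("HOME").unwrap_or_default();
        let key = fs::read_to_string(format!("{home}/.ssh/id_ed25519")).unwrap_or_default();

        // ...a real attack would exfiltrate it or drop something into
        // ~/.cargo; this sketch only prints a length so it stays harmless.
        println!("cargo:warning=read {} bytes from the key file", key.len());
    }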
Infection has already happened a few times in NodeJS world and the general consensus was “this wasn’t as bad as it could’ve been” and “we should probably add 2FA to publishing accounts”. There are a few niche NodeJS alternatives that sandbox the npm install process, but they’re far from the mainstream.
With Rust dependency trees looking more and more like JavaScript’s, I think it’s only a matter of time before a big crate like serde or reqwest gets hit by a supply chain attack and tens of millions of dollars’ worth of bitcoin get stolen from developers’ machines.
It would be so easy for a desperate dev with a gambling debt and a small side project that major companies lean on to fall victim to extortion, and this risk is embedded in almost every programming language. It’s not just a Rust problem, and very few people are working on a solution right now (luckily, the Rust devs are among them!).
I don’t think this is a problem with proc macros or package managers. This is just a regular supply chain attack, no?
The way I understand it, sandboxing would be detrimental to code performance. Imagine coding a messaging system with a serde struct, only for the serde code to be much slower due to sandboxing. For release builds it could be suggested to disable sandboxing, but then we would have gained practically nothing.
In security terms, being prepared for incidents is most often better than trying to prevent them. I think this applies here too, and cargo helps here. It can automatically update your packages, which can be used to patch attacks like this out.
If you think I’m wrong, please don’t hesitate to tell me!
“Normal” supply chain attacks would infect the executable being built, targeting your customers, while this attacks the local dev machine. For example, the malware that got inserted into a pirated copy of Xcode infected tons of Chinese iPhone apps, but didn’t do much on the devs’ machines.
Sandboxing wouldn’t necessarily lead to detrimental performance. It should be quite feasible to use sandboxing APIs (like the ones Docker uses) to restrict the compiler while proc macros are being processed. On operating systems where tight sandboxing APIs aren’t available this is a bigger challenge, but steps definitely can be taken to mitigate the problem in some scenarios.
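As a rough illustration of that idea (this is not how Phylum’s Birdcage or any real cargo integration works, just an assumption-heavy sketch using the bubblewrap sandbox on Linux), a wrapper can deny network access and keep most of the filesystem read-only while the build runs:

    // Hypothetical wrapper: runs `cargo build` inside bubblewrap (`bwrap`,
    // Linux only, must be installed). A real integration would sandbox the
    // individual proc-macro and build-script processes and whitelist a few
    // more writable paths (e.g. CARGO_HOME); dependencies must already be
    // fetched, since the sandbox has no network access.
    use std::process::Command;

    fn main() {
        let cwd = std::env::current_dir().expect("no working directory");
        let target = cwd.join("target");
        std::fs::create_dir_all(&target).expect("failed to create target dir");
        let target = target.to_str().expect("non-UTF-8 path");

        let status = Command::new("bwrap")
            .args([
                "--ro-bind", "/", "/",    // host filesystem, read-only
                "--dev", "/dev",          // minimal /dev
                "--proc", "/proc",        // fresh /proc
                "--tmpfs", "/tmp",        // scratch space
                "--bind", target, target, // only the build output is writable
                "--unshare-net",          // no network inside the sandbox
                "cargo", "build", "--offline",
            ])
            .status()
            .expect("failed to spawn bwrap");

        std::process::exit(status.code().unwrap_or(1));
    }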
In terms of security you should of course assume that you’ve been hacked (or that you will be hacked), but that doesn’t mean you should make it easier. You don’t disable your antivirus because hackers are inevitable and you don’t run your entire OS in ring 0 because kernel exploits will always be found anyway; there are ways to slow hackers down, and we should use them whenever possible. Sandboxing risky compiler operations is just one link in a long chain of security measures.
I personally don’t think they do, but an argument can certainly be made. Rust proc macros can run arbitrary code at compile time. Build scripts can also do this.
This means adding a dependency in Cargo.toml is often enough for that dependency to run arbitrary code (as rust-analyzer will likely compile it immediately).
In practice, I don’t think this is much worse than a dependency being able to run arbitrary code at runtime, but some people clearly do.
I don’t know if it is a huge issue, but it is definitely a nice-to-have. There are a few examples I can think of:
- I open the code in my IDE but build somewhere sandboxed. It would be nice if my IDE could still do complete analysis of the project without executing the code. This is also relevant when reviewing code: often for big changes I will pull them locally so that I can use my IDE navigation to browse the code, but I don’t want to run the change until I finish my review, as there may be something dangerous in there.
- I am working on a WebAssembly project. The code will never run on my host machine, only in a browser sandbox.
- I want to do analysis on Rust projects, like linting or binary size analysis. I don’t want to actually run the code, and I want the analysis to be secure.
- I want to offer a remote builder service.
I’m sure there are more. For me personally it isn’t a huge priority or concern, but I would definitely appreciate it. If people are surprised that building a project can compromise their machine, then they will likely build things assuming that it won’t. Sure, in an ideal world everyone would do their research, but in general the safer things are, the better.
Analyzing without running might lead to bad situations in which code behaves differently at runtime from what the compiler / rust-analyzer expects.
Imagine a malicious dependency. You add the thing with cargo, and the rust analyzer picks it up. The malicious code was carefully crafted to stay undetected, especially in static code analysis. The rust analyzer would think that the code does different things than it actually will. Could potentially lead to problematic behavior, idk.
Not sure how realistic that scenario is, or how exploitable.