Some security thoughts for nerds:
- the STM32F7 doesn’t have a Memory Management Unit (MMU); it has a Memory Protection Unit (MPU), which is less capable, so a Python virtual machine is a good way of separating applications
- it’s highly likely that there will be ways for malicious code to break out of the expected behaviour of the Python sandbox
- blindly running code stored on a module that you bought off the internet isn’t great
- the Python code on the modules could be signed using asymmetric cryptography (e.g., ECDSA) so that it can be verified before execution (see the sketch after this list)
- EEZ could hold a master private key and issue certificates to developers that let them sign their own code
- maybe unsigned code could run if the user clicks an “OK, run it this time” button or configures their unit into some ‘developer’ mode
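A minimal sketch of what that sign-then-verify flow might look like, using Python's `cryptography` package with P-256 ECDSA and a detached signature. The library choice, key type, and helper names here are my assumptions for illustration, not anything EEZ has committed to:

```python
# Sketch: sign a module's Python code with ECDSA, verify before execution.
# Assumptions: the `cryptography` package, NIST P-256 keys, and a detached
# signature shipped alongside the module code. Not an official EEZ design.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def sign_module(private_key: ec.EllipticCurvePrivateKey, code: bytes) -> bytes:
    """Developer side: produce a detached ECDSA signature over the code."""
    return private_key.sign(code, ec.ECDSA(hashes.SHA256()))

def verify_module(public_key: ec.EllipticCurvePublicKey,
                  code: bytes, signature: bytes) -> bool:
    """Device side: check the signature before handing code to the VM."""
    try:
        public_key.verify(signature, code, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# Demo round trip with a freshly generated key pair.
key = ec.generate_private_key(ec.SECP256R1())
code = b"print('hello from a signed module')"
sig = sign_module(key, code)

if verify_module(key.public_key(), code, sig):
    exec(code)  # signature checks out: run it
else:
    # Unsigned or tampered code: this is where an "OK, run it this time"
    # prompt or a 'developer mode' override would slot in.
    print("refusing to run unverified module code")
```

On the device itself, verification would presumably run against an EEZ public key (or a developer certificate chaining to the master key above) baked into the firmware, rather than a key pair generated at runtime as in this demo.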
Ok, "nerds" are welcomed with their comments here, and I have to discuss this with Martin, too.
Nerd here. Opinionated nerd with software security experience, including design and implementation of software provisioning secured by public key encryption.
I would really rather not have to jump through hoops to run my own code on my own BB3. The target user isn't naïve.
I'd like to point out that there are currently 129 backers, with less than two weeks remaining on the campaign. This does not make the BB3 a substantial attack target. There's a joke: When you are in a group being chased by a bear, you don't have to run faster than the bear, you just have to run faster than the slowest person in the group. Are people going to spend time attacking the BB3 when they could be trying to take over npm or pypi packages with tens or hundreds of millions or more of potential targets?
If you start signing other people's code, you are asserting something about it. Let's pretend you set this up and start signing code. What precisely does that signature represent? Does it mean you have audited the code for safety, and are confident that it won't break something? If someone is writing malicious code, are you more likely to catch it than someone with expertise in the domain in which the code applies? If you sign malicious code because you missed the malicious nature, and it causes harm, have you incurred liability by explicitly signing the malicious code?
Encryption is not magic pixie dust that makes code secure.
You have to start with what the actual threat is, and then design a system to protect against that threat. I don't see a threat that this proposal actually protects against. I understand the theoretical danger of running "untrusted code" on a power supply, but sprinkling encryption over that code is not an effective protection mechanism.
I did warn that I'm an opinionated nerd here. ☺