Back doors that would be trivially found in OSS: https://pierrekim.github.io/blog/2020-07-07-cdata-olt-0day-vulnerabilities.html
I worked for Intel and had access to FW, and judging by comments so did half of /r/netsec and two thirds of HN. So any conspiracy would be very hard to hide, as it would involve an unbelievably large number of privacy-oriented people. And I'll repeat myself: even if your goal is to have a backdoor capability for all recent Intel CPUs, why would you choose to implant a new bug when there's already so much (buggy) code running at ring-subzero privileges? Wouldn't it make more sense to employ that army of math wizards, physics grads, and reverse engineers that the NSA spends so many resources hiring and training? Independent researchers have found so many high-impact bugs in Intel's hardware, why would you expect any less from the government that's out to get you? tl;dr: embedded systems are so buggy that there is no need to add any intentional backdoors
“While DSP chips provide a relatively economical solution that allows mobile phones to provide end users with more functionality and enable innovative features—they do come with a cost,” researchers from security firm Check Point wrote in a brief report of the vulnerabilities they discovered. “These chips introduce new attack surface and weak points to these mobile devices. DSP chips are much more vulnerable to risks as they are being managed as ‘Black Boxes’ since it can be very complex for anyone other than their manufacturer to review their design, functionality or code.”
https://archive.org/details/hopeconf2020/20200725_1000_Hacking_a_Human_Mind_in_Conversation.mp4 On how law has changed over time from consumer-protecting, innovation-focused Trademark/Copyright/Patent laws to current IP laws, how this made reverse engineering illegal, and how it may make fuzzing illegal in the future as well.
IT is the only field in which this is practiced: No one builds a bridge or a hospital and keeps the composition of the steel or the equations used to calculate load stresses a secret. It is a frankly bizarre practice that leads, time and again, to grotesque security defects on farcical scales, with whole classes of devices being revealed as vulnerable long after they are deployed in the field and put into sensitive places.
"but some quick Googling showed that it wasn't open source, which suggested to me that the code had probably undergone substantially less security review, and hence probably had more low-hanging bugs in it, than the open-source parts of the kernel."
"Security researchers rejoice when Apple accidentally releases symbolicated kernelcaches or development libraries, but this is just because it saves time reversing, not because it makes things newly reversible. Any capable attacker will find bugs regardless of the presence or absence of symbols; all the lack of symbols does is keep the bug away from eyes (like mine) that might report it. Hence, withholding symbols is an incredibly weak protection, only deterring the lowest tiers of attackers and serving to make the bugs that have been found last longer"
Secrecy of the crypto algorithm vs. publishing the algorithm and keying it with a secret (Kerckhoffs's principle: the system should remain secure even if everything about it except the key is public).
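A minimal sketch of the published-algorithm-plus-secret-key approach, using HMAC-SHA256 as an illustrative example (the function names `authenticate` and `verify` are my own, not from any quoted source): the algorithm is fully public and heavily reviewed, and security rests only on the secrecy of the key.

```python
import hmac
import hashlib
import secrets

def authenticate(message: bytes, key: bytes) -> bytes:
    """Compute a MAC with a published, well-reviewed algorithm (HMAC-SHA256).

    Nothing here is secret except the key: anyone can audit the construction,
    which is the opposite of relying on a proprietary, unreviewed algorithm.
    """
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(authenticate(message, key), tag)

# The key is the only secret in the whole system.
key = secrets.token_bytes(32)

tag = authenticate(b"firmware update v1.2", key)
assert verify(b"firmware update v1.2", tag, key)
assert not verify(b"tampered update", tag, key)
```

If the key leaks, you rotate it; the algorithm itself never needs to change. Contrast that with a secret algorithm: once it is reverse engineered (as the quotes above argue it inevitably will be), every deployed device is broken at once and there is nothing to rotate.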