Apple could hand over the code for review—though this is not something it has said it will do. Researchers can also try to reverse engineer the feature in a “static” manner—that is, without executing the actual programs in a live environment.
Realistically, however, all of those options share at least one major problem: They don’t allow you to look at the code running live on an up-to-date iPhone to see how it actually works in the wild. Instead, these methods still rely on trusting not only that Apple is being open and honest, but also that it has written the code without any significant errors or oversights.
Another option would be to grant members of Apple’s Security Research Device Program access to the system so they could verify the company’s statements. But that program, made up of researchers outside of Apple, is highly exclusive and constrained, with so many rules about what participants can say or do that it doesn’t necessarily solve the problem of trust.
That leaves only two realistic options for researchers who want to peer inside iPhones for this kind of thing. First, hackers can jailbreak old iPhones using a zero-day vulnerability. That approach is difficult, expensive, and can be shut down with a security patch.
“Apple has spent a lot of money trying to prevent people from being able to jailbreak phones,” Thiel explains. “They’ve specifically hired people from the jailbreaking community to make jailbreaking more difficult.”
Or a researcher can use a virtual iPhone with Apple’s security features turned off. In practice, that means Corellium.
There are also limits to what any security researcher will be able to observe, but a researcher might be able to spot whether the scanning extends beyond photos that are being shared to iCloud.
However, if material that is not child abuse imagery makes it into the databases, that would be invisible to researchers. To address that concern, Apple says it will require two separate child protection organizations in distinct jurisdictions to both have the same CSAM image in their own databases. But it has offered few details about how that would work, who would run the databases, which jurisdictions would be involved, and what the ultimate sources of the databases would be.
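Conceptually, that requirement amounts to using only the intersection of the two organizations’ hash lists, so no single provider can add an image on its own. The Python sketch below illustrates the idea under that assumption; the organization names, hash values, and function are hypothetical, and Apple has not published how its actual pipeline enforces the rule.

```python
# Illustrative sketch only: one way a "two independent databases" rule could
# work in principle. Names and values here are hypothetical, not Apple's
# actual implementation, which has not been made public.

def build_on_device_hash_set(org_a_hashes: set[str], org_b_hashes: set[str]) -> set[str]:
    """Keep only hashes that both child protection organizations,
    operating in distinct jurisdictions, independently report as CSAM."""
    return org_a_hashes & org_b_hashes


if __name__ == "__main__":
    org_a_hashes = {"hash_001", "hash_002", "hash_003"}  # organization in one jurisdiction
    org_b_hashes = {"hash_002", "hash_003", "hash_004"}  # organization in another jurisdiction

    # A hash submitted by only one organization (hash_001 or hash_004) never
    # makes it into the set shipped to devices, so a single organization or
    # government could not unilaterally insert non-CSAM material this way.
    print(sorted(build_on_device_hash_set(org_a_hashes, org_b_hashes)))
    # ['hash_002', 'hash_003']
```

Even under this reading, researchers outside the organizations would have no way to inspect what the source hash lists contain, which is the gap critics keep pointing to.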
Thiel points out that the child abuse material problem that Apple is trying to solve is real.
“It’s not a theoretical concern,” Thiel says. “It’s not something that people bring up just as an excuse to implement surveillance. It is an actual problem that is widespread and needs addressing. The solution is not like getting rid of these kinds of mechanisms. It’s making them as impermeable as possible to future abuse.”
But, says Corellium’s Tait, Apple is trying to be simultaneously locked down and transparent.
“Apple is trying to have their cake and eat it too,” says Tait, a former information security specialist for the British intelligence service GCHQ.
“With their left hand, they make jailbreaking difficult and sue companies like Corellium to prevent them from existing. Now with their right hand, they say, ‘Oh, we built this really complicated system and it turns out that some people don’t trust that Apple has done it honestly—but it’s okay because any security researcher can go ahead and prove it to themselves.’”
“I’m sitting here thinking, what do you mean that you can just do this? You’ve engineered your system so that they can’t. The only reason that people are able to do this kind of thing is despite you, not thanks to you.”
Apple did not respond to a request for comment.