What they're doing here is, of course, making any kind of subversion a hell of a lot harder, and I welcome that. It's also a strong signal that they want to protect my data. To me, that makes them by far the most trusted AI vendor at the moment.
History has shown, at least to date, that Apple has been a good steward. They're as good a vendor to trust as anyone. Given that a huge portion of their brand has been built on "we don't spy on you", the second they do spy, they lose all credibility, so they have a financial incentive to keep protecting your data.
Apple have never been punished by the market for any of these things. The idea that they will "lose credibility" if they livestream your AI interactions to the NSA is ridiculous.
What kind of targeted advertising am I getting from Apple as a user of their products? Genuinely curious. I'll wait.
The rest of your comment may be factually accurate, but it isn't relevant for "normal" users, only for those hyper-aware of their privacy. Don't get me wrong, I appreciate knowing this detail, but you also need to realize that there are degrees of privacy.
Also, full-disk encryption is opt-in on macOS. But the reason isn't that Apple wants you to be insecure; they probably just want to make it easier for their users to recover data if they forget a login or backup password they set years ago.
> real-time location data
Locations are end-to-end encrypted.
Apple does first-party advertising in two relatively minuscule apps.
Facebook and Google power the majority of the world's online advertising, have multiple data-sharing agreements, deploy tracking pixels widely, allow for browser fingerprinting, and are deeply integrated into almost all e-commerce platforms and sites.
> allows officials to collect material including search history, the content of emails, file transfers and live chats
> The program facilitates extensive, in-depth surveillance on live communications and stored information. The law allows for the targeting of any customers of participating firms who live outside the US, or those Americans whose communications include people outside the US.
> It was followed by Yahoo in 2008; Google, Facebook and PalTalk in 2009; YouTube in 2010; Skype and AOL in 2011; and finally Apple, which joined the program in 2012. The program is continuing to expand, with other providers due to come online.
https://www.theguardian.com/world/2013/jun/06/us-tech-giants...
E2E encryption. It might not be applicable to remote execution of AI payloads, but it applies to almost everything else, from messaging to storage.
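To make the idea concrete, here's a minimal sketch (not any vendor's actual protocol): the client encrypts before anything leaves the device, so the server only ever stores or relays ciphertext. Real E2E messaging adds asymmetric key exchange; this symmetric example, using the `cryptography` package, is illustrative only.

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # lives only on the client device(s)
    box = Fernet(key)

    ciphertext = box.encrypt(b"note to sync")   # this is all the server sees
    assert box.decrypt(ciphertext) == b"note to sync"  # only key holders can read it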
Even if the client hardware and/or software is also an actor in your threat model, that threat can be eliminated, or at least mitigated, with at least one verifiably trusted piece of equipment. Open hardware is one alternative, and some states build their entire hardware stack to eliminate such threats. With even a single piece of trusted equipment, mitigations become possible, e.g. an external network filter, as sketched below.
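A toy sketch of that filter idea: a trusted box sits in front of the untrusted device and forwards traffic only to an explicit allowlist, so even a compromised client can't exfiltrate to arbitrary servers. The hostnames here are placeholders, not any vendor's real endpoints.

    # Toy default-deny egress filter running on the one trusted device.
    ALLOWED_HOSTS = {"updates.example.com", "push.example.com"}  # placeholders

    def should_forward(dest_host: str) -> bool:
        # Anything not explicitly allowed is dropped.
        return dest_host in ALLOWED_HOSTS

    assert should_forward("updates.example.com")
    assert not should_forward("exfil.attacker.invalid")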
Just make absolutely sure you trust your government when using an iDevice.
*cough* HW backdoor in iPhone *cough*
There is such a thing as threat modeling. The fact that your model stops some threats but not all of them doesn't mean it's theater.
But any such software must be publicly verifiable; otherwise it cannot be deemed secure. That's why they publish each version in a transparency log, which is verified by the client and, more hand-wavily, by a public brain trust of researchers.
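For a sense of what "verified by the client" means mechanically: transparency logs are typically Merkle trees, and the client checks an inclusion proof for a release against a published tree root. A generic RFC 6962-style sketch (this is not Apple's actual PCC log schema; leaf format and names are illustrative):

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def leaf_hash(entry: bytes) -> bytes:
        # RFC 6962-style domain separation: 0x00 prefix for leaf nodes.
        return sha256(b"\x00" + entry)

    def node_hash(left: bytes, right: bytes) -> bytes:
        # 0x01 prefix for interior nodes.
        return sha256(b"\x01" + left + right)

    def verify_inclusion(entry: bytes, proof: list[tuple[str, bytes]],
                         root: bytes) -> bool:
        # Walk the audit path from our leaf up to the published root;
        # each proof step says which side the sibling hash sits on.
        current = leaf_hash(entry)
        for side, sibling in proof:
            current = (node_hash(sibling, current) if side == "left"
                       else node_hash(current, sibling))
        return current == root

    # Tiny usage example: a two-leaf tree.
    a, b = leaf_hash(b"release-1"), leaf_hash(b"release-2")
    root = node_hash(a, b)
    assert verify_inclusion(b"release-1", [("right", b)], root)

If the log operator ever serves a software image whose hash isn't in the tree, the proof fails, and any honest client can detect the lie.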
This is also just a tired take. The same thing could be said about passcodes on their mobile products or full-disk encryption keys for the Mac line. There'd be a massive loss of goodwill, and legal liability, if they subverted the technologies they claim make their devices secure.
A simple example of the sort of legal agreement I'm talking about is a trust. A trust isn't just a legal entity that takes custody of some assets and doles them out to you on a set schedule; it's more specifically a legal entity, established by contract and executed by some particular law firm acting as its custodian, that obligates that law firm, as executor, to provide only a certain "API" for the contract's subjects/beneficiaries to interact with and manage those assets: a more restrictive API than they would otherwise have had a legal right to.
With trusts, this is done because that restrictive API (especially the "you can't withdraw the assets all at once" part) is what legally makes the trust a trust, and therefore what makes the legal (mostly tax-related) benefits of trusts apply, instead of the trust just being a regular holding company.
But you don't need any particular legal impetus to create this kind of "hold onto it and don't listen to me if I ask for it back" contract. You can just... write a contract with terms like that, and then ask a law firm to execute it for you.
Insofar as Apple has engaged some law firm to in turn engage a hosting company, where the hosting company is obligated to the law firm to provide a secure environment for deploying software images and to report accurate trusted-compute metrics to the law firm, and where the law firm is legally obligated to have any image updates Apple hands over independently audited, accepting only "justifiable" changes (per some predefined contractual definition of "justifiable"), then I would say this is a trustworthy arrangement. Just like a trust is a trust-worthy arrangement.
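To put the "restrictive API" idea in programmer's terms, a purely illustrative sketch: the custodian holds something with a full interface but exposes only the narrow operations the contract permits. None of these names correspond to any real legal or financial system.

    # Illustrative only: the custodian wraps the full interface and
    # exposes just the operations the contract allows.
    class Holding:
        def __init__(self, balance: float):
            self.balance = balance

        def withdraw(self, amount: float) -> float:
            self.balance -= amount
            return amount

    class TrustCustodian:
        def __init__(self, holding: Holding, per_period_cap: float):
            self._holding = holding
            self._cap = per_period_cap

        def scheduled_withdrawal(self, amount: float) -> float:
            # The only door the beneficiary gets. There is deliberately
            # no withdraw_all(), even though the underlying Holding
            # technically supports draining the balance.
            if amount > self._cap:
                raise PermissionError("contract forbids this")
            return self._holding.withdraw(amount)

The point of the analogy: the restriction isn't that the capability doesn't exist underneath, it's that the custodian is bound not to expose it, even if the beneficiary (or Apple, in the hosting arrangement) asks.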
It stands to reason that such control is a prerequisite for "security".
Apple does not delegate its own "security" to someone else, a "steward". Hmmm.
Yet it expects computer users to delegate control to Apple.
Apple is not alone in this regard. It's common for "Big Tech", "security researchers" and HN commenters to advocate for the computer user to delegate control to someone else.
> for them to run different software than they say they do.
They don't even need to do that. They don't need to do anything different than they say.
They already are saying only that the data is kept private from <insert very limited subset of relevant people here>.
That leaves the door wide open for them to share the data with anyone outside that very limited subset. You just have to read what they say, and also read between the lines. Apparently they aren't going to say whom they share it with, but they are going to carefully craft what they say so that some people get misdirected.