Find A Vulnerability In Apple Software; Lose Your License As An Apple Developer
from the kill-the-messenger dept
It appears that Apple is the latest company to take a "kill the messenger" approach to security vulnerabilities. Hours after security researcher Charlie Miller found a huge vulnerability in iOS, which would allow malicious software to be installed on iOS devices, Apple responded by taking away his developer's license. The obvious implication: don't search for security vulnerabilities in Apple products, and if you do find them, keep them to yourself.
First off, here's Miller explaining the security hole:
“I’m mad,” he says. “I report bugs to them all the time. Being part of the developer program helps me do that. They’re hurting themselves, and making my life harder.”
And, no, this is not a case where he went public first, either. He told Apple about this particular bug back on October 14th. Either way, this seems like a really brain-dead move by Apple. It's only going to make Apple's systems less secure when it punishes the folks who tell it about security vulnerabilities.
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.
–The Techdirt Team
Filed Under: blame the messenger, charlie miller, ios, security, vulnerabilities
Companies: apple
Reader Comments
PS: Another reason I am staying away from Apple and the shine.
Re: Re:
You want Linux to be Windows' only competition? Ick. We need more competition, not less.
False Blame
> vulnerabilities in Apple products, and if you do find
> them, keep them to yourself.
This is not at all the proper implication. Miller, the author, was not removed from the developer program for finding and reporting the bug. He was removed for knowingly creating and uploading an application to the App Store that *exploited* the bug! The application was sitting in the App Store for over a month.
Although unlikely, someone else could have downloaded the app, found the vulnerability and maliciously exploited it.
He broke the rules to which he agreed when signing up to the developer program. He was removed from the program as a result of that.
The true "obvious implication" to draw from this is: If you find a vulnerability report it through the appropriate means, create something that demonstrates the exploitation (if you so desire) but don't upload the app you know breaks the rules to the App Store.
Re: False Blame
If you find a vulnerability, then sell it to the highest bidder.
Re: Re: False Blame
Monkey, you need a dose of reality. Just because you can do something, doesn't mean it's a good idea to do it. Lots of other companies (Sun, Microsoft, etc.) have already found this out. If you don't pay attention to the bug that's reported, and you don't give any feedback, security guys will force your hand by creating an app that leverages the flaw. They've been doing it for years. This is Apple's first time through the security line, because this is the first time they have had enough users to make it profitable for hackers. We're about to find out how virus-proof Apple isn't.
Re: Re: Re: False Blame
The author did not report the bug and wait. He didn't tell Apple he had found an exploit and had an app waiting to demonstrate it. He didn't upload the app to the App Store after waiting a week, a month... or however long.
The reality is Miller found an exploit, created an app, and uploaded it to the App Store. After that, he made public that he had found an exploit and demonstrated it using the publicly available app (which had been there for over a month).
There was no bug reported. There was no opportunity to force the corporate hand. What you don't do is exploit that bug in the working sandbox first.
Did we not read the last paragraph in my original post? The part that points out that one *SHOULD* report bugs and create something that demonstrates the exploit?
Re: Re: Re: Re: False Blame
I think this points out how wrong you are. He told Apple first, then he went public with it. It seems reasonable to me. They don't listen otherwise. Tell them first and then make them take action by telling everyone else. Unfortunately, Apple acted foolishly in response. I think Apple doesn't want people to know that their products aren't as secure as they claim they are. Lack of confidence in the brand can be costly.
Re: False Blame
Are you suggesting that a miscreant could have discovered a vulnerability by downloading an app with very simple behavior, decompiling it, then sifting through the code looking for Easter eggs? Anyone with the skill to do that could have found the vulnerability far more easily by studying the code-signing protocol, the way Miller did.
Apple's actions may be defensible legally, but only legally.
Re: Re: False Blame
Understanding the nature of the vulnerability (in hindsight) illustrates that it could have been discovered by very simple means, however unlikely it is that an individual would have picked this particular app.
The discussions over at Gizmodo on this story are at least somewhat insightful from a technical point of view, instead of people just running at it from a "big company keeping down the little guy" point of view.
Apple has a sandbox called the "App Store". They've made rules for playing in the sandbox, like "don't piss in the sandbox". If you piss in the sandbox, you're not allowed to play in the sandbox for a little while.
> Apple's actions may be defensible legally, but only
> legally.
You're stretching the notion of "legally" a bit there. We're talking about a private company here. If Apple wanted to refuse an app because they didn't like the color scheme, they could. They don't have to have a reason to reject an app from the store -- they just can, because they want to.
This isn't a moral question. There are rules set up for developers who wish to participate in the App Store. Miller *BROKE* those rules!
Should Apple be working their butts off to fix this? Yes.
Should they have fixed it sooner? Dunno. Maybe they've been trying since it was discovered. Maybe they've been busy playing table tennis instead.
Could Apple have looked the other way? Could they have punished him a little less? Could they still reverse or revise the decision? Yes. Yes. Yes.
Do they need to? No. A developer knowingly introduced an app into the store that exploited a security flaw, and was punished according to the agreements he signed.
Re: Re: Re: False Blame
If I'm parsing this correctly, you're saying that once the app was in the store, the vulnerability was as likely to be discovered by someone picking the app (from all the apps in the store), decompiling it, analyzing the results and discovering the exploit, as by studying iOS. That is absurd. I have lost count of the times I've heard of an independent researcher discovering a hole in a large, supposedly secure IT system; I have never heard of someone discovering a hole by deconstructing an app or other published software which shows no sign of malicious behavior.
(As for the rest of your argument, I really can't make any sense of it. You seem to be agreeing with me, then claiming victory.)
Re: Re: Re: Re: False Blame
> (As for the rest of your argument, I really can't make any
> sense of it. You seem to be agreeing with me, then claiming
> victory.)
If you are referring to the "actions may be defensible legally" aspect of the conversation, then yes - I am agreeing with you. I hadn't claimed (or meant to) anything to the contrary.
The simple fact is Miller broke his agreement that he signed with Apple when he uploaded a compromised application to the app store. As a result, his license was suspended per that agreement.
Others have pointed this out from slightly different angles...
See John Fenderson's post in this thread, or the "Two things" or "He broke the App store agreement..." threads below.
Re: Re: Re: Re: Re: False Blame
"You broke the rules and breaking the rules is bad and you need to be punished."
"So, did I actually do anything wrong?"
"You broke the rules and breaking the rules is bad and you need to be punished."
Re: False Blame
> the app, found the vulnerability and maliciously
> exploited it.
Considering that (from TFA) the guy had worked for the NSA, I suspect he would have prevented that possibility. The App's purpose was to demonstrate the bug, not exploit it.
The ultimate demonstration is having a signed, trusted app do something untrusted.
> He broke the rules to which he agreed when signing
> up to the developer program.
Without doing so, it may be impossible to confirm and demonstrate the bug.
Re: Re: False Blame
> demonstrate the bug.
In part, I agree. To expand on something you mentioned earlier in your post...
> The ultimate demonstration is having a signed, trusted
> app do something untrusted.
I would expand on that and add that getting such an app into the App Store (i.e., getting it approved by Apple itself) is the ultimate demonstration. You may have been implying this already by "signed, trusted app".
To show that a signed app, running in the iOS sandbox, could do something untrusted does not require its presence in the App Store. You can demonstrate this on a development platform.
It becomes tricky when you add "trusted" into it -- as you point out. At what point is it "trusted"? When Apple approves it for the App Store?
I'll say "yes" to the above. The app has passed the final review, so it can't be any more trusted then that.
The bump in the road here is that the exploitation was not previously disclosed. First step is to reveal the vulnerability and give the appropriate company (Apple, in this case) time to fix it (how much time is a different discussion).
If the exploitation had been disclosed, it would have provided Apple the opportunity to (1) fix it, or at least (2) watch for it and deny that "trusted" status so it never made it to the App Store.
But, okay. People don't agree with that sentiment. We'll switch up the argument -- creating the app and getting it onto the App Store to demonstrate the full vulnerability was the right course.
Miller should have, or could have: (1) pointed out the issue to Apple privately, (2) publicly exposed the issue and removed the app, or (3) discreetly waited until the security conference before exposing it. Unfortunately, he made it public on Twitter well in advance.
I *don't* believe he meant this to be malicious in any way. But he isn't just a "messenger".
Re: Re: False Blame
Miller did the right thing in his actions. They were for the greater good. But the right thing he did was to fall on his sword, because Apple also did the right thing in terminating his developer agreement. Miller deliberately violated a very important term in the contract, and to ignore that fact would also make Apple look bad by, in a sense, playing favorites.
Re: Re: Re: False Blame
I hope my obvious lack of Internet commenting skill is apparent, because your post summarizes exactly what I've not been able to express in umpteen replies.
Re: Re: False Blame
Part of what he was testing was the approval process, in that "trusted" apps could do untrusted things. DannyB points out the same notion above, which I also commented on.
But the ability to demonstrate how to run untrusted code from within an app did not require it to be in the App Store. The ability to get that app approved did, yes.
But demonstrating the process outside the App Store would not have required a significant mental leap for anyone within the development community to see the potential.
Had he demonstrated this on an app outside the App Store first, then waited (how long is a different conversation) before testing the app approval process, this would absolutely be a different story.
Re: Re: False Blame
Nope, every application is reviewed by humans. Remarkably stupid humans based on the number and types of rejections.
I tried to put a simple app in the store and it went through 5 rejections. The best one was "Use of a folder icon." The design guide says you shouldn't make any UI which implies the existence of a directory structure ... because if Apple users ever accidentally learn how computers work they won't be Apple users anymore.
Disappointed
Anyway...
I do want to ask the security community a question, though. How fast does a large corporation have to respond to a security issue before going public? Is one month a realistic timeframe when there are development cycles that can't always respond to the countless submissions from the community? Google, Microsoft, Apple and others aren't immune to the volume of requests that come in on any given day.
I'm not sure one month is enough, and while it's good that the issue was discussed in public without detailing the flaw exactly, do people have to be so trigger-happy with disclosure when, as a security analyst, he's said it's a pretty obscure bug? Dangerous... but very obscure.
What if a three strikes rule (I almost shudder at saying such a thing in these forums when it's such a hot topic in other circles) is a way to compromise on disclosure? Can security researchers agree to go public only after following up with a vendor on a software defect a third time, with a one-month span between emails?
Otherwise, different security researchers keep giving disjointed responses to these types of bugs, and Apple et al are left putting out fires in their development cycle and disrupting their plans on the threat that someone doesn't think they move fast enough (which, again, could be different from person to person).
Re: Disappointed
Any procedure generally held in respect by the community usually has an explicit protocol to follow if the maintainer in question fails to address the vulnerability. For example, a tiered approach, where in the first month the vulnerability is disclosed to the company, in the second month it is disclosed to researchers at a minimum of three respected security firms, and in the third month it is disclosed to the entire internet, including the black hat community.
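To make that concrete, here is a minimal sketch of such a schedule. The tier windows and audiences below are only an example for illustration, not a published standard or any organization's actual policy:

from datetime import date, timedelta

# Hypothetical staged-disclosure schedule; windows and audiences are illustrative only.
TIERS = [
    (timedelta(days=0),  "vendor only"),
    (timedelta(days=30), "vendor plus a few trusted security firms"),
    (timedelta(days=60), "full public disclosure"),
]

def disclosure_audience(reported_on, today):
    """Return who may be told about the bug, given when the vendor was notified."""
    audience = TIERS[0][1]
    for window, who in TIERS:
        # Later tiers overwrite earlier ones once their window has elapsed.
        if today - reported_on >= window:
            audience = who
    return audience

# Example: a bug reported to the vendor on October 14th would, by mid-November,
# still be in the "vendor plus a few trusted security firms" stage.
print(disclosure_audience(date(2011, 10, 14), date(2011, 11, 15)))

The point is only that the escalation is mechanical and announced up front, so the vendor knows exactly what silence costs.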
Apple probably feels it acted in accordance with its policy regarding the app store, but I don't think they're being very smart about the matter. They're drawing publicity to a security vulnerability they haven't fixed yet.
Re: Disappointed
Response time should be near zero; implementation is a different story.
If something is brought to your attention you need to have a verifiable process for dealing with and prioritizing it.
Re: Disappointed
I'm guessing Apple shrugged off his concerns and didn't give him any kind of response (let alone a timeline for a fix) which pretty much forces your hand. While announcing these to the public is temporarily dangerous, failure to inform people is far worse.
Re: Disappointed
Someone (99% of the time a customer) reports a security vulnerability. This person is responded to immediately, and the problem is handled with them the same way any other software fault is handled: an ongoing dialogue while our internal process continues.
This does not necessarily mean the fault is solved immediately -- maybe not even within a month. It depends on the fault. Some things (even things that seem easy) require a tremendous effort to resolve.
If the reporter were to publicly disclose the problem prior to our resolution, it would not force our hand or make things get fixed faster. Security problems are already given maximum attention. All it would do would be to increase the risk to other customers until the problem is resolved.
Not all companies take such issues so seriously, of course, and the threat of public disclosure may spur them into action. We try to reassure the reporter that things are being taken seriously through dialogue in part to prevent premature disclosure.
Re: Disappointed
Are you seriously saying that companies should have 3 months to fix 'zero-day' exploits, and that the customers 'exploited' during those 3 months should have no recourse?
If you go to the store and buy a product, get home and it doesn't work, would you allow the store 3 months, with 1 month between each e-mail before they provide you with a working product?
This is exactly how Microsoft once responded
The result was that the policy of those finding vulnerabilities became to publicly disclose the vulnerability immediately.
But then vendors cried that it was irresponsible to publicly disclose these vulnerabilities before they could be patched.
So then the policy became to secretly disclose the vulnerability only to the vendor so they could fix it.
Then vendors just sat on those reports and did not fix the vulnerabilities.
So then a policy became to disclose to the vendor, and then after a reasonable period, regardless of vendor (in)action, disclose to the public.
And yes, in the last three decades, there have been others who have been punished for trying to help a vendor fix a security problem in their software.
The moral of the story: disclose anonymously to the vendor. Later, disclose anonymously to the public.
Some vendors figured out to PAY bounties to those who discover vulnerabilities instead of punishing them.
Re: This is exactly how Microsoft once responded
There's no reason at all to give these companies the benefit of free professional consulting. They've conclusively proven that they will (a) ignore (b) deny (c) threaten (d) punish -- so why bother with them? Just publish on the full-disclosure list or elsewhere, and let them reap the whirlwind.
Re: Re: This is exactly how Microsoft once responded
Can you cite incidents where Apple has ignored, denied, threatened or punished someone for reporting vulnerabilities through the proper channels? Answer: NO, because there are no incidents like that.
Re: Re: Re: This is exactly how Microsoft once responded
That's obvious nonsense, of course. Merely reporting facts accurately makes NOBODY the villain. Clearly, those to blame for any adverse consequences are (a) those who take malicious action based on the facts and (b) those whose incompetence, laziness, and stupidity are directly responsible for the situation.
Can you cite incidents where Apple has ignored, denied, threatened or punished someone for reporting vulnerabilities through the proper channels?
If you can't, then you're clearly far too uninformed to be worthy of participation in this discussion. Please run along and provide yourself with the appropriate remedial education before continuing.
Re: Re: Re: Re: Re: This is exactly how Microsoft once responded
People aren't remedial. Remedial means "intended to remedy, as in a shortcoming".
Re: Re: This is exactly how Microsoft once responded
That is an irresponsible policy.
A more responsible one is to disclose to the vendor and then disclose publicly after a short window for the vendor to fix it. It is not a matter of free consulting. (In fact, I like that Google has paid bounties to people who find these kinds of things.) It is a matter of social responsibility.
If you disclose to the public with no prior notice to the vendor, then how is this getting you any monetary gain? It's not. So I don't see your argument about "free consulting". Yet immediate disclosure is clearly less socially responsible because it gives the bad guys an opportunity to exploit it.
As you say, the vendor has millions of dollars. A short window to fix it should be plenty -- regardless of the severity of the problem -- because the vendor has vast resources to get it fixed quickly. Assuming the vendor feels a responsibility to fix it at all.
Whether or not the vendor fixes it in a short time, you should publish your findings.
Re: Re: Re: This is exactly how Microsoft once responded
Doesn't this presume they're not already exploiting it?
Re: Re: Re: Re: This is exactly how Microsoft once responded
If a larger group were exploiting it, it would have become public already.
Complete public disclosure enables anybody with skill and malicious intent to exploit it.
So, I have to agree that the company should be given a warning shot "fix this or else". Companies will eventually catch on and try to avoid the "or else".
Re: Re: Re: Re: Re: This is exactly how Microsoft once responded
Wrong.
There are some very healthy marketplaces where vulnerabilities are bought and sold and traded. They're not public -- well, not very public. But they exist, and it's quite routine for security holes and sample exploit code to be sold -- either once, to the highest bidder, or repeatedly, to anyone with sufficient funds.
And because these known marketplaces exist, we must posit the existence of others that may be entirely unknown except to the very few who frequent them. These no doubt cater to exclusive clients who have demonstrated a willingness to pay handsomely.
Of course any of the buyers in either kind of marketplace could then re-sell what they have -- or use it themselves. Or both.
Re: Re: Re: This is exactly how Microsoft once responded
First, you're presuming that the bad guys don't already have it. In all likelihood, they do. And it's further likely that they've had it for a while. Keep in mind that the bad guys have long since demonstrated superior intelligence, ingenuity and persistence when compared to the software engineers at places like Microsoft and Oracle and Apple. Their livelihood depends on it. Given this, failure to disclose publicly immediately provides considerable aid to the bad guys, who can continue exploiting security holes against victims who don't even know they're victims and thus are unlikely to take steps to defend themselves.
Second, you're presuming that "disclosing to the vendor" is not the same as "disclosing to the bad guys". But of course in many cases it is, due to pitifully poor vendor security -- or the simple expedient of having a well-paid informant on the inside. (If I were a blackhat, I'd certainly cultivate contacts at major software vendors. I'm sure they have staff that are underpaid and/or disgruntled, and thus readily susceptible to bribes. Or blackmail.) So if they didn't already know...they'll know now. And once again, victims will be the only ones left in the dark.
The only way to level the playing field is to fully disclose everything immediately and publicly. It's unlikely to be news to the more talented bad guys and it will at least inform victims and potential victims what they're up against. Not doing so is a major win for blackhats.
Re: Re: This is exactly how Microsoft once responded
Spoken like someone who has never written any code in his/her life.
And let me preempt any "I'm a full time programmer" response by saying, "No, you are not, or you would have never made such a stupid uneducated statement."
Re: Re: Re: This is exactly how Microsoft once responded
No, I'm not a full-time programmer, although I have been. What I am is paranoid, careful, meticulous, exacting, and thorough. And I expect no less of others -- but clearly, the overwhelming majority of people who program are thoroughly incompetent.
Re: Re: Re: Re: This is exactly how Microsoft once responded
(By which, I mean that it's sickening that such an idea became outmoded, because in practice it saves enormous amounts of money and immediately and permanently negates the present dilution of the programmer's trade.)
Re: Re: Re: Re: This is exactly how Microsoft once responded
Which still doesn't guarantee bug free code.
Let's not get confused, I'm not advocating that customers should be used as your first form of testing, far from it. The code should be reviewed, unit tested, regression tested, black box tested, and then go through multiple beta phases.
But the flippant notion that you can write 100% bug free code the first time every time is asinine and only a person with an astounding lack of intelligence would make such an assertion.
Re: Re: Re: This is exactly how Microsoft once responded
Spoken like someone who doesn't write good code.
Re: Re: Re: Re: This is exactly how Microsoft once responded
I'll take that bet, I'll pay you $100,000,000 if you've written code for more than 2 years and managed to produce a product as complicated as an operating system which has zero bugs ... yeah, that's what I thought.
C'mon, if this were a water poisoning, would anybody accept that the problem be treated secretly and disclosed only after the whole mess was cleaned up?
Re:
Poison water hurts people immediately whether disclosed or not.
An un-exploited vulnerability hurts no one as long as it remains un-exploited by the bad guys.
Giving the vendor a chance to fix it is likely to prevent harm. (IF the vendor fixes it, and IF no bad guys find it in the meantime.)
Your analogy is wrong. Water poisoning is like the bad guy exploiting the vulnerability. That is the actual poisoning. The vulnerability is a way in which poison can be introduced. The knowledge of the vulnerability should be disclosed to those who can fix the vulnerability and prevent poison from being introduced into the water system.
Re: Re:
This is seriously wrong.
First, it presumes a nonsensical situation: that a vulnerability will remain unexploited. As everyone with any security clue at all knows, that's not going to happen. Now...it may be exploited poorly, or ineffectively, but it is absolutely inevitable it will be exploited.
Second, existence of vulnerability X is often indicative of vulnerabilities Y and Z. And vice-versa. In other words, it is foolish to presume that a vulnerability known to the vendor is the ONLY one. It's equally foolish to presume that a vulnerability known to the bad guys is the ONLY one. Bad code usually doesn't have just one bug.
Third, vulnerabilities are there for anyone with sufficient knowledge and determination to find. But they're also there for anyone with sufficient funds (to purchase) or guile (to steal). (Yes, I know that "steal" isn't quite the right verb, but a better one doesn't suggest itself immediately.) It is one of the massive, arrogant conceits of those who oppose immediate full public disclosure that they actually believe these things can be kept secret.
They can't.
They won't.
The biggest beneficiaries of any attempt to conceal any part of this process are the very people it purports to defend against. Secrecy serves their purposes well.
Re: Re: Re:
Shoot the messenger.
Re: Re: Re: Re:
Unfortunately, he wasn't a messenger.
Miller did not inform Apple of the bug beforehand. He did not provide an opportunity to acknowledge and fix the bug.
He knowingly uploaded an app that violated the terms and introduced a vulnerability. He then, over a month later, made a public announcement that the bug existed and that the app to prove it was already in the App Store.
Re: Re: Re: Re: Re:
Can you cite any evidence to show when the app was uploaded and when Apple was notified?
Also, how can you prove that a vuln exists without a proof-of-concept exploit? Is there any way to confirm the vuln without uploading something to the app store?
Re: Re: Re: Re: Re: Re:
True. After he had already created and gotten an app approved into the App Store. So the vulnerability was public.
> Can you cite any evidence to show when the app was
> uploaded and when Apple was notified?
From the original Forbes article:
"Miller had, admittedly, created a proof-of-concept application to demonstrate his security exploit, and even gotten Apple to approve it for distribution in Apple’s App Store by hiding it inside a fake stock ticker program..."
> Also, how can you prove that a vuln exists without a
> proof-of-concept exploit? Is there any way to confirm the
> vuln without uploading something to the app store?
It depends on what you claim the ultimate vulnerability is. This is being discussed in greater detail in other threads where people are ripping me apart.
There are two vulnerabilities here:
1) The ability to actually run untrusted code
2) The ability to get an app approved in the App Store that exploits that.
In answer to your question for those two points:
1) Yes. This can be shown outside the App Store.
2) No, obviously not, since getting the app approved is the end-goal.
Had he made #1 public first and then demonstrated #2 after giving Apple a chance to respond, this would absolutely be all about Apple covering something up.
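For what it's worth, the split between #1 and #2 is easier to see with a toy example. This is only an illustrative sketch of the kind of check code signing is supposed to enforce; none of these names reflect Apple's actual implementation:

import hashlib

# Illustrative only: pretend a signed app ships with a manifest listing the
# hashes of every piece of code it is allowed to execute. Vulnerability #1
# above is any path that lets code run without passing a check like this.
def may_execute(code_bytes, signed_manifest):
    return hashlib.sha256(code_bytes).hexdigest() in signed_manifest

reviewed = b"print('code that went through review')"
fetched_at_runtime = b"print('code downloaded later, never reviewed')"
manifest = {hashlib.sha256(reviewed).hexdigest()}

print(may_execute(reviewed, manifest))            # True  -- covered by the signature
print(may_execute(fetched_at_runtime, manifest))  # False -- the exploit made this runnable anyway

#2 -- getting an app that exercises such a path approved -- can, by definition, only be demonstrated in the store itself.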
Re: Re: Re: Re: Re: Re: Re:
He made malicious code available to the public. This puts it in the hands of others who would use it for evil. And now they know to look for it. Hopefully the exploit is patched before that happens.
Anyone know if he tested his proof of concept by running code on someone's device? That would be a serious issue whether the code was malicious or not.
Re: Re: Re:
You said: "The flagrant disregard of mutual agreements has started more than one war."
It is true that this is often seen as the flash point. But really, it is often the one sided nature of a 'negotiating process' that sets up the later conflict. A signed contract from the weaker party makes it a lot easier to invade. Even better if the contract that was signed had impossible to meet terms.
Re: Re:
I've been increasingly disturbed by the courts' desire to hold contract law above all other laws. There are still some rights you cannot contract away, but they seem to be diminishing. I understand why contracts have gotten where they are today; I just don't understand how this is sustainable. Most individuals are party to dozens, if not hundreds, of contracts at any given time, and it is ridiculous to think that people can truly understand and agree to that many contracts (home (probably 5 - 10 contracts alone), phone, cable, car, software, etc.).
I have said this crap before...
Also, why is the video unlisted? Make it public so Apple can find out why it's a bad idea to punish developers.
Yet another example of Apple's poor security model
Re: Yet another example of Apple's poor security model
Then do the same exercise with M$. Right again. Fun, no?
Are you all really that sure of a reported timeline?
Does it really matter regardless?
The basic fact that Miller was able to get an application approved and into the store that leveraged a very simple and, in hindsight, obvious hack in iOS is what should actually be setting off the alarms for everyone. I see this as the smallest tip of the iceberg starting to show above the water for Apple. From a practical standpoint they'd be better off slapping Miller's wrist publicly, then thanking him with an incentive behind the scenes. Perhaps even adding an associated non-disclosure agreement and prioritizing any bug notices he sends them from now on. Technically, what he did was in fact against the ToS, but the point is that ill intentions are NOT going to be blocked by a ToS! Apple really needs to step up their own security and responses or things are going to get ugly for them really fast.
Re: Are you all really that sure of a reported timeline?
Well said, thank you
Two things
If Apple had not killed his account and had allowed him to upload more apps that exploited other security holes to harass iPhone users, Apple could be liable, because they knew he was someone who had published a malicious app in the past and took no action to prevent it in the future.
Perhaps they could have just flagged the account so that all apps submitted by him are rejected until the flaw is fixed, but I doubt their system is that sophisticated.
If Apple stops there and does nothing else, shame on them. However, I suspect they will patch this in an update soon.
He broke the App store agreement...
Or is it just that you want someone to jump on, especially if they are a big company with lots of fanboys/girls?
He broke the agreement that he signed with Apple when he uploaded a compromised application to the App Store. Period.
Apple was well within its rights to retaliate for this action.
It has nothing to do with whether it's right or wrong, or whether it made his life harder.
He broke a contract that, let me emphasize this, HE signed. Period. There is no other issue involved.
Re: He broke the App store agreement...
Sure, their response might not have been appropriate, but it's not like the guy's was... "Ignore me and I'll post hacks disguised as apps, then complain when I get caught". Awesome. And you guys all bit and aren't letting go.
psssh
Apple is like the Pope if you think about it sideways.
Apple tells you everything Apple is magical and just works; to admit a flaw is anathema to them.
The Pope is the word of God, everything he says is true and just so. It is impossible for the Pope to admit a mistake because of Papal infallibility.
Apple is so focused and concerned about their image that they are alienating consumers. The sad thing is these consumers stick with Apple, even when they are forced to bear the burden of Apple's failures.
If you shoot the good guy who comes and tells you you're flawed, do not then cry when they stop showing up. Do not complain when your myopic staff cannot see the flaws and less-than-good people then exploit them for profit.
What really needs to happen is for a system to be put in place where exploits can be submitted, verified, and sent to the affected company with a deadline: fixed or not, this goes public in X days. They can hate on the faceless group then, rather than target the person who found the bug.
Some flaws should not be suppressed, and it is far cheaper and easier for a corporation to launch lawsuits to crush the finder of the bug than to fix the flaw. Then, when someone else finds and exploits that flaw, they can point at the original reporter to shift the blame: didn't warn us, didn't give us enough time, released it to hurt us...
So what exactly is the benefit of looking for flaws if you're not out to exploit them? You get treated like crap, sued, and painted as evil. Are we surprised some people just skip to being evil? How many times can you kick them before they kick back?
Apple Negatively Attacks A Developer
Off Course Apple