On the post: Court Says Password Protection Doesn't Restore An Abandoned Phone's Privacy Expectations
Re: Re:
I think there's also room to argue that if he thought that the phone was kaput, i.e. that it was not *possible* to recover the data from it, then the act of abandoning the phone did not constitute an act of abandoning the data on the phone. (An analogy to non-digital form would be abandoning a wallet with cards which had gotten so wet that the ink had run and they had become unreadable.)
That doesn't seem to have been raised in court, however, and I'm not sure there aren't valid reasons for the court to reject that line of reasoning in any case.
On the post: Consumers Who Had Their Identities Stolen By A Spam Bot Demand FCC Investigate Bogus Net Neutrality Comments
Re: Was a comment deleted?
I've now seen it happen on the recent Aussie-catering-company article, and there are still several of the recent large-block-of-Chinese-text spam posts there; I've flagged them as spam, but they aren't hidden yet.
Hypothesis: Something is going through the hidden-because-flagged posts looking for ones which are actually blatant spam, and deleting what is found.
On the post: Netflix Admits It Doesn't Really Care About Net Neutrality Now That It's Big
Re: Re: I win again...
I think his point in the observation/assertion you cite is more that "if you give the government the power to do X good thing when a party you like is in power, the government can also then do reverse-of-X bad thing when a party you don't like is in power".
I.e., that giving power to government is dangerous, and that you shouldn't give government authority on the basis of "I trust that you won't abuse this".
That principle is true enough and fair enough - it's just that he takes it to an unreasonable, absolutist extreme.
On the post: Consumers Who Had Their Identities Stolen By A Spam Bot Demand FCC Investigate Bogus Net Neutrality Comments
Was a comment deleted?
This is weird. I just reloaded this page to check whether there had been new comments, and the listed "number of comments" immediately below the article went down by one - from 38 to 37.
I've never seen that happen on Techdirt before.
On the post: Former FCC Commissioner Uses Manchester Bombing As A Prop To Claim Net Neutrality Aids Terrorism
Re: Re: Sigh.
When markdown is enabled, the spacing between paragraphs is different; whatever HTMLizing path is used for markdown leaves less space between paragraphs than the path used without markdown.
Personally, I think the spacing from the non-markdown approach is preferable, but even if you disagree with that, it's still undeniable that the difference exists.
On the post: More Legislators Jump On The 'Blue Lives Matter' Bandwagon
Re: Re: Re: Re:
On the post: DOJ Officials Express An Interest In Prosecuting Leakers And Whistleblowers
Re: Legal Route
The proper way to do whistleblowing is to inform the people who have authority over the people who are committing and/or authorizing the wrongdoing, without letting those latter people know you're doing it.
If you can do that by going through channels, then going through channels is the right thing to do.
Otherwise, any method that gets the message through is potentially acceptable.
In this case, the person who appears to be committing and/or authorizing the wrongdoing is the President of the United States of America, to whom everyone else in the executive branch of the government answers; the only people who have authority over him are the American people themselves, i.e., the public.
As such, the only way to blow the whistle on wrongdoing in the White House is to report it to the public - and the most effective way to do that is to go through the news media.
(There's an argument to be made about reporting it to Congress instead, but given how many people in Congress support the President, that would arguably be tantamount to reporting it to some of the people who are authorizing or approving of the wrongdoing.)
On the post: Brazilian Journalist Detained By UK Border Police For Reading A Book About ISIS
Re: Re: Re: Still one step ahead...
Close, but not quite.
To have "wrong" thoughts is indeed called "thought crime", but it is not "pre-crime" or "thinking of things associated with crime".
"Thought crime" is the idea that having certain thoughts is itself a crime. It's not "you're thinking about murder, and murder is a crime, so we're going to arrest you"; it's "you're thinking that the Supreme Leader might not be perfect, and thinking that the Supreme Leader is not perfect is a crime, so we're going to arrest you".
On the post: Someone Under Federal Indictment Impersonates A Journalist To File Bogus DMCA Notice
Re: Re: Re: Re: Re:
He was quoting what an imaginary person being charged with copyright violation over downloaded songs might say, with the phrasing chosen to point out the problems with bots that automatically file DMCA notices against URLs that meet certain criteria.
On the post: Senate Should Either Fix Or Get Off The Pot On Copyright Office Bill
Re: Re: "the bill serves no purpose, and Congress shouldn't waste its time on it" -- Pffft! Best we can hope for is waste their time!
On the post: FCC Guards 'Manhandle' Reporter Just For Asking Questions At Net Neutrality Vote
Re: Re: Re: Re: Re: Re: Re:
Be careful about that, though; while the best ranked-preference systems out there (basically, the Condorcet method) don't have this problem, the ones most popularly known under the "instant runoff" moniker do not entirely eliminate the perverse-incentive effect which leads to strategic voting.
The problem is the one described by what is called Arrow's impossibility theorem - in particular, the monotonicity criterion and independence of irrelevant alternatives (IIA).
The details are in the articles at those links, but basically, it is impossible to design a voting system which does not fail at least one of a set of specified criteria.
Every voting system I've seen described, except for the Condorcet method, copes with this by sacrificing the criterion known as monotonicity, i.e., the guarantee that voting for A over B will never increase the odds of B winning. As I understand matters, the Condorcet method instead sacrifices the criterion known as unrestricted domain, in that it can sometimes result in a cyclic loop - in which A defeats B, B defeats C, and C defeats A - so that there is no definable winner.
(I have ideas in mind for how to avoid such loops in practice with - as far as I can see - no real-world downsides, by sacrificing only the determinism aspect of unrestricted domain when such a loop is encountered, but that would be a separate discussion.)
The methods commonly known as instant-runoff voting, apparently including the one adopted in Maine and just recently struck down as a violation of the state constitution, consist basically of either "drop the candidate who received the most last-place votes" or "drop the candidate who received the fewest first-place votes", repeated until there is only one candidate left.
It's well known that single-choice first-past-the-post voting violates IIA and/or monotonicity, by way of what is known as the spoiler effect; voting for A means you don't vote for C, so C's vote total is reduced, so B has a better chance of winning than if you'd voted for C. While it is harder to construct an example for the "drop most-last-place" or "drop fewest-first-place" instant-runoff systems, such examples do exist; the linked Wikipedia articles for IIA and the monotonicity criterion have some.
As long as it is possible to increase A's chances by changing the way you rank B (or vice versa), the spoiler effect - and the perverse incentives associated with it - will always exist. This problem is smaller under IRV than under any single-choice voting method that I know of, but it still exists. The only voting system out there that I know of that does not have this problem is the Condorcet method.
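Since that's easier to see concretely: here's a minimal Python sketch (entirely my own illustration, with made-up ballots and candidate names) of a Condorcet-style pairwise tally, including the cyclic-loop case where no winner is definable:

    # Count pairwise preferences from ranked ballots, then look for a
    # candidate who beats every other candidate head-to-head. If none
    # exists, we've hit the A-beats-B-beats-C-beats-A loop described above.
    def condorcet_winner(ballots, candidates):
        wins = {a: {b: 0 for b in candidates if b != a} for a in candidates}
        for ballot in ballots:  # each ballot lists candidates best-to-worst
            for i, a in enumerate(ballot):
                for b in ballot[i + 1:]:
                    wins[a][b] += 1
        for a in candidates:
            if all(wins[a][b] > wins[b][a] for b in candidates if b != a):
                return a
        return None  # cyclic loop: no Condorcet winner

    # Three equal blocs of hypothetical voters produce the classic cycle:
    ballots = ([["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 3
               + [["C", "A", "B"]] * 3)
    print(condorcet_winner(ballots, ["A", "B", "C"]))  # prints None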
On the post: FBI Insider Threat Program Documents Show How Little It Takes To Be Branded A Threat To The Agency
Re: Official channels?
Well, that's not strictly true.
If your co-workers are breaking the rules and you report on that to your boss, that's technically "blowing the whistle on" the rule-breaking - drawing attention to it so that people will realize that it's going on.
Similarly, if your boss is breaking the rules and you report it to his boss, that's "blowing the whistle on" the rule-breaking again. This is probably technically going outside of official channels - but if your organization has established specific channels for reporting wrongdoing, and you report wrongdoing through those channels, the fact that they're official doesn't make it any less whistleblowing.
The problem comes in when the people you're required to report to, under the official channels, either are or are under the control/command of the people engaged in or authorizing the wrongdoing. At that point, the official channels are somewhere in the range from worthless to actually dangerous.
On the post: Senate Given The Go-Ahead To Use Encrypted Messaging App Signal
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Why do people believe that AES is secure?
"Security by obscurity is not very good security at all, it might stop pimple faced kids in mommies basement but it will not stop knowledgeable and motivated personnel." (from another post in this article)
Obscurity being "a thing that is unclear or difficult to understand".
No - in this context, "obscurity" means "being little-known". I.e., if your security relies on not many people knowing about you, you're not really very secure.
It's the difference between "everyone knows there's a combination lock here, but not many people know the combination, and it's hard to figure out" and "the combination to this lock is easy to figure out, but not very many people know that this combination lock exists in the first place". The latter is "security by obscurity"; the former is not.
In simple analogy, an encryption algorithm is like a lock, and an encryption key is like the combination to that lock. Keeping the combination secret is not security by obscurity; keeping the algorithm secret is.
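To put the analogy in concrete terms, here's a minimal Python sketch (my own illustration, assuming the third-party "cryptography" package is installed): the AES-GCM algorithm - the lock - is completely public and well-studied, and all of the secrecy lives in the key - the combination.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # the "combination": the only secret
    nonce = os.urandom(12)       # public, but must never be reused with a key
    ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", None)
    print(AESGCM(key).decrypt(nonce, ciphertext, None))  # b'attack at dawn'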
Both can increase security, technically (just as having a hidden combination lock with a hard-to-figure-out combination is technically more secure than a non-hidden lock with the same combination) - but keeping the algorithm secret is short-term security at best (just as the hidden combination lock will eventually be discovered), and because of all the ways a privately-devised encryption algorithm could have unknown weaknesses, is more likely to reduce net security (vs. using a known and well-studied one) than increase it.
"You can think of every component of your encryption machine being an attack surface. The more you expose, the more opportunity you give the attacker." (from the post being replied to)
That depends on what you mean by "expose".
If you mean "put in a place which is accessible to be attacked", then sure; that's true of any software. However, if there's a hole somewhere else in the software, you may unexpectedly find that an interface which you thought was internal-only may suddenly be reachable by an external attacker - and is therefore exposed, for this purpose.
If you mean "make known to the attacker", then no - because you cannot guarantee that the attacker will never know a given detail; even in the absolute best-case scenario, much less a real-world plausible scenario, binary disassembly and decompilation are things which exist.
On the post: Senate Given The Go-Ahead To Use Encrypted Messaging App Signal
Re: Re: Re: Re: Re: Re: Re: Re: Why do people believe that AES is secure?
"My point was that "open source" encryption systems (especially widely used ones) can be broken once, and then there is an automated way to uncover the messages of everyone who uses them. Open Source AES, for example, one break, and everyone is compromised. Closed source encryption systems, especially UNIQUE closed source systems (closed to the attacker, not the user) do not suffer this vulnerability." (from the post being replied to)
Eh? That doesn't make sense.
Whether the system is open or closed, once a way to break in through it has been found, anyone using the now-broken system is vulnerable.
Assuming the fact of the vulnerability isn't disclosed somehow, the odds of its being found by people who have the ability and access to fix it scale roughly with the number of such people who exist - which would probably give the edge to the open system.
Once the vulnerability is known, the odds of a fix actually being created depend on how many people with the ability and access to fix it actually care to do so. There are different factors affecting that in open and closed contexts, so this one could be argued case-by-case, and may be a wash.
But once a fix has been created, it has to be gotten out to the users.
With open software, the users can (generally speaking) get the fix for free, the same way they (generally speaking) got the original software. That means there's little obstacle to their getting it.
With closed software, the users may very well need to pay to get the fix - especially if "being paid" is one of the reasons the providers of the closed software bothered to create a fix in the first place. That means there is an obstacle in the way, which makes users less likely to actually get the fixed version.
Even if the providers of the closed software make the fix available for free to anyone who already has the unfixed software (including people who pirated it?), there may still be other obstacles; consider the number of people who turn off Windows Update because they don't trust Microsoft not to break things they like, to say nothing of the number of organizations which turn it off because they know updating will break things. The same consideration does apply with open software to some extent, but IMO less so, since in the worst case the users can still avoid any undesired changes by forking.
"Imagine, for example, an automated encryption service that produces a private encryption system. Pay some small fee, and bango, you get a UNIQUE encryption JUST FOR YOU. This is not hard, and can easily be layered ON TOP OF (not INSTEAD OF) an existing system, like AES. So, you get all the benefits of the public review, but none of the weakness of a system used by ANYONE else." (from the post being replied to)
This does not necessarily hold. Although I do not fully understand the details or recall my source for this just offhand, I am given to understand that in some cases, adding additional mathematical manipulation to the math which constitutes a given form of encryption can actually make it easier to reverse the process and extract the original cleartext from the ciphertext.
(Using the same data twice in the process is one thing which can have this result; for example, while using the cleartext itself as the seed for your RNG to produce an encryption key might seem like a good idea, it means that the number which the cleartext represents has been used twice in producing the ciphertext, and that in turn may make the net mathematical transformation less complex.)
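As a toy illustration of the general point (not of the seed-reuse case specifically, and with arbitrary example parameters): stacking two affine ciphers - E(x) = a*x + b mod 26 - just produces a third affine cipher, so the second "layer" adds no strength at all.

    # Two layers of affine encryption collapse into one equivalent layer,
    # exactly as easy to break as a single affine cipher.
    def affine(a, b, text):  # a must be coprime to 26 for decryptability
        return "".join(chr((a * (ord(c) - 65) + b) % 26 + 65) for c in text)

    msg = "ATTACKATDAWN"
    double = affine(7, 8, affine(5, 3, msg))              # two layers...
    single = affine((7 * 5) % 26, (7 * 3 + 8) % 26, msg)  # ...equal one
    assert double == single

AES itself is nothing like an affine cipher, of course; the point is only that "more layers" is not automatically "more security", and in unlucky cases can even mean less.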
On the post: US Court Upholds Enforceability Of GNU GPL As Both A License And A Contract
Re: Re: Re: Re: Re: Meh, GPL.
It can be argued to be trolling because bringing up the disagreement between the philosophy behind the copyleft licenses and the one behind the permissive licenses serves no purpose except to stir up an argument, and that's very close to the base definition of trolling (as I've put it in a few places and quoted it in this very comments section, "posting with the intent to cause a furor").
You may very well be 100% sincere in your preference for non-copyleft do-as-you-will open-source licenses - but the sincerity of a post does not neutralize its potential for trollishness, and your sincerity doesn't change the effect of bringing that point up when that "vs." is not already under discussion.
On the post: British Human Rights Activist Faces Prison For Refusing To Hand Over Passwords At UK Border
Re: Re:
No, but if a judge orders you to do it, and you refuse, you go to jail for contempt of court - and you stay there until you are no longer in contempt, i.e., until you comply with the order. If you never do that? Life in prison, without a conviction and even potentially without charges.
On the post: BBC Says It May Contact Your Boss If You Post Comments It Finds Problematic
Re: Re: it just takes a bit of subtlety
That's arguable. Some kinds of subtle trolling are actually worse than the obvious sorts; if the troll can convince or otherwise maneuver people into having the disruptive-of-meaningful-discourse argument on their own, rather than having to hold up (at least one side of) the argument entirely through his/her/etc. own efforts, that does more damage while at the same time being easier and more satisfying for the original troll.
On the post: It's Time For The FCC To Actually Listen: The Vast Majority Of FCC Commenters Support Net Neutrality
Re: Re: Not 'if', merely 'when' and 'how'
A: "Obamacare", in addition to not actually originally being a Democratic idea (and not being the left's preferred approach in any case), isn't actually that bad - or at least wouldn't be if it were being properly supported and tweaked at the federal level, rather than being undermined and having any attempts at tweaking it in ways which would make it work better blocked by people who want it to fail.
B: "Obamacare for the internet" isn't even a remotely close comparison. Obamacare is a sizable bureaucratic establishment, with lots of details, moving parts, and funding or other budgetary requirements, which directly touches *everyone* due to its individual mandate; rules requiring that the network be neutral are (or can be) relatively simple and straightforward, with zero bureaucracy or even funding required unless the few people who are *directly* affected by them (all of whom work for ISPs) try to flout the rules.
(C: The use of "Democrat" as an adjective, in contexts where it isn't short for "member of the Democratic Party", is a red-flag indication that the speaker has a distinct right-wing bias.)
On the post: FCC Ignores The Will Of The Public, Votes To Begin Dismantling Net Neutrality
Re: Re:
One, it's a flub, and what was meant was "consecutive".
Two, it's referring to multiple such entities each having had a record-breaking year in the same year.
Not sure which is more likely to have been meant.
On the post: FCC Ignores The Will Of The Public, Votes To Begin Dismantling Net Neutrality
Re: Re: Re: Re: Re: Re: Re: FCC Ignores The Will Of The Public,
(...but, is there a hurricane tonight?)