On the post: Alabama Lawmakers Think The Time Is Right To Make Assaulting A Cop A 'Hate Crime'
Re: Cops aint minorities
Add to this that minorities are, in general, weak as communities. Some of their members might be individually strong, but by sheer numbers and lack of rights, they are vulnerable, historically oppressed, and deserving of added protection.
Cops in general are respected members of society (when they don't bring the hate on themselves through their own actions), with several layers of physical and legal protections. (Badges and authority, guns, "aggravated circumstances" when assaulted, etc.)
Adding them to the list of "protected categories" just allows bullies to tag themselves with the label of "victim" because some people are complaining about the bullying. (Not that all cops are bullies - hopefully - but those who are will definitely abuse the designation.)
On the post: Alabama Lawmakers Think The Time Is Right To Make Assaulting A Cop A 'Hate Crime'
Outright lying
“Everyone agrees that it should be a hate crime to shoot a police officer,” said state Sen. Cam Ward, (...) The question is, ‘What gets tacked on?’ Yes, you can find a bipartisan solution.”
I think he's outright lying here. He doesn't want to add cops to "protected classes" just in case a cop gets shot. Most people would indeed agree that shooting a cop is a criminal act that should be prosecuted with the full force of the law. But, as stated in the article, you already risk the death penalty for that, so this is overkill (literally).
However, "hate crimes" also include several restrictions on what is normally speech protected by the First Amendment. I suspect that his actual goal (or that of whoever "suggested" this bill to him) is to add a chilling effect on speech criticizing cops. "Insulting" a cop could then be considered a hate crime, adding more charges against someone angry at being arrested. (Charges which often already include "resisting arrest", even when the cops had no legal reason to arrest in the first place.)
On the post: No Surprise: Judge Says US Government Can Take The Proceeds From Snowden's Book
Re: Re: Re: Contract law does not trump the first amendment
He indeed wasn't an NSA employee, but he was still an NSA contractor who had to sign a pretty broad NDA. He did so voluntarily, and he breached the contract. This much is not in dispute.
You're right that the problem is that the US doesn't want to let him argue his reasons and motives, so any trial would exclude a proper defense, which should be guaranteed under the "due process" clause of the Constitution. The fact that he wanted to expose illegal behavior (or at the very least suspicious activity) is not a valid defense under the Espionage Act (which uses a broad definition of "espionage" too) and other similar "national security" laws. I don't know how this would turn out should this defense be allowed, but as things are now, he doesn't stand a chance.
This lawsuit is not as critical as the actual espionage charges pressed against him, but it was just as one-sided for the same reason: the facts are not arguable, and the motives are completely discarded. Not even presented and rejected, but outright disregarded from the start. That still doesn't change the fact that the contract with the NSA was voluntarily signed, then breached on his side.
On the post: Austrian Hotel Drops Libel Lawsuit Against Guest Who Complained About Pictures Of Nazis In The Lobby
Re: Well that was awkward
... wondering why exactly they had a picture of someone in nazi uniform in a prominent place in the hotel.
My thoughts exactly. Even had that "grandpa" not actually been a member of the nazi party, it wouldn't change the fact that they proudly displayed him posing in a nazi uniform. There had to be something in better taste. Like a picture of him posing in casual clothes. Or a picture of a local sightseeing spot. Or of a flower vase. Anything other than a man in nazi uniform.
Or maybe they really are proud of him serving in the nazi army, even assuming he was not a card-carrying nazi. In which case they should not be surprised when people assume he was a nazi (which, as it turns out, he was) and draw conclusions from their choice of portrait on display. If they are proud of this portrait, let them own that choice with pride. (Not sure what good that would do to their business, but "moral" principles come first, I assume?)
On the post: Insanity (AKA Copyright Statutory Damages) Rules: Cox Hit With $1 Billion (With A B) Jury Verdict For Failing To Magically Stop Piracy
Re: The enemy of my enemy.
The only way to come out of this sane is to look at the issue rather than the actors. In this case, you might have reasons to hate the ISP, but they are hands down the party you should side with.
If this is considered willful copyright infringement, the Internet overall will be doomed, in the US at least. The only option left will be to completely drop any effort at moderation, because any less-than-perfect implementation will be interpreted as active support for the infringement, which is crazy.
In any other domain, it would be seen as exactly that. Imagine someone commits a murder in a building despite a gun check at the entrance: nobody would think of suing the security guards for willfully murdering the victim. Failing to find the gun is not the same as being an accomplice. (There would likely be an investigation to make sure none of the guards was indeed an accomplice, but none of them would be sued for merely failing to find the gun.)
Obviously, circumstances are a bit different here, but the core concept is the same: failure to prevent a behavior is not the same as willfully contributing to it. Judging otherwise is a terrible idea, regardless of who the victim of the decision is.
On the post: Another Law Enforcement Investigation Tool Found To Be A Junk Science Coin Toss
Re: Re: worse
If it's just about "finding a liar", I agree with you that it could likely be fixed to give better results than a coin toss. It's probably not as simple as reversing the current conclusions, but multiple "hints" are definitely read as the opposite of what they more often mean.
However, the probability can go lower than 50/50 given that this so-called "scientific method" tries to guess much more than just "who's lying". The examples given include finding childhood trauma in Comey's memoir. So, short of Comey actually writing "I was not beaten as a child" and SCAN finding that to be a lie, this is a wild guess with a probability of success well below 50%.
On the post: Austria's Top Court Says Police May Not Install Surveillance Malware On Computers And Phones, Nor Collect Vehicle And Driver Information Covertly
Re: Re:
Actually, I'll side with the previous AC on this.
Godwin's law is more about comparing someone to nazis or Hitler in order to discredit their arguments. (i.e. if he had told someone "wow, you just sounded like a nazi", that would be worth a Godwin point.)
This is about making a comparison to WWII to explain why Austria might feel uncomfortable with the idea of secret surveillance. And I think he might be right. WWII left several countries pretty much traumatized for decades. I wouldn't be surprised if this move by the Austrian police triggered memories of Nazi surveillance. Whether the reaction is justified is a different question.
On the post: Another Law Enforcement Investigation Tool Found To Be A Junk Science Coin Toss
worse
It's actually significantly worse than a coin toss.
It takes indicators of truth and labels them as indicators of lying, and vice versa.
It is already well known that human memory is unreliable, with gaps and a severe lack of chronological ordering. Someone lying generally has a more "complete" story ready to be told than an honest witness or suspect.
That's not to say that lies don't have flaws, particularly when scrutinized, but they're harder to reveal in a single written deposition.
Also... "I didn't do it" is the best sign of innocence?
Really?
Trump must be the most innocent person on Earth, then. And this SCAN guy is likely the single best detective on the planet. (/s, for those who missed it.)
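There's a neat consequence to "worse than a coin toss": a systematically wrong detector is still informative, you just flip its verdicts. A quick simulation, with all numbers invented for illustration (this is not how SCAN is actually scored):

```python
import random

random.seed(42)

# Model a SCAN-like tool that reads honest indicators (memory gaps,
# disorder) as signs of lying, so its verdicts are systematically
# wrong rather than randomly wrong.
def detector_says_lying(is_lying: bool) -> bool:
    # Correct verdict only 30% of the time; inverted the other 70%.
    return is_lying if random.random() < 0.30 else not is_lying

statements = [random.choice([True, False]) for _ in range(10_000)]
raw = sum(detector_says_lying(s) == s for s in statements) / len(statements)
flipped = 1 - raw

print(f"raw accuracy:     {raw:.2%}")      # ~30%: worse than a coin toss
print(f"flipped accuracy: {flipped:.2%}")  # ~70%: better than a coin toss
```

That only works for the binary "who's lying" question, though; flipping does nothing for open-ended guesses like "childhood trauma", where the space of wrong answers is far larger than two.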
On the post: Tennessee Deputy Who Baptised An Arrestee And Strip Searched A Minor Now Dealing With 44 Criminal Charges And Five Lawsuits
I remember a few jokes about "In Soviet Russia".
"In America, you watch Big Brother."
"In Soviet Russia, Big Brother watch you!"
"In America, you call the police."
"In Soviet Russia, the police calls you."
But nowadays, you don't need to travel through space and time to "Soviet Russia" when some of these jokes have turned so terrifyingly real back home in the US.
On the post: Guess What? Many Cookie Banners Ignore Your Wishes, So Max Schrems Goes On The GDPR Attack Again
1. Consent stored before choice
2. No way to opt out
3. Pre-selected choices
4. Non-respect of choice
I've seen a ton of sites guilty of point 2. You get a nice banner telling you "we use cookies", and that's all. Definitely no opt-in, and not even an opt-out.
I don't necessarily mind point 3 as long as it's clear: if the law requires an opt-out, you can pre-select consent. You cannot, however, start acting as if the user consents before the selection is submitted. That's point 1, and it makes the opt-out basically irrelevant since at least some data has already been collected and passed along by the time the user is done making a choice.
Point 4 is obviously the worst: you have an illusion of privacy that is not actually enforced. That's not only circumventing consent, which points 1 and 2 are guilty of, but also adding an outright lie on top of it.
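Points 1 and 4 boil down to a single ordering rule: no collection may happen while the stored choice is "not yet submitted" or "refused". A minimal sketch in Python (the class and method names are my own invention, not any real consent library):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ConsentGate:
    """Collect nothing until an explicit choice has been submitted."""
    choice: Optional[bool] = None           # None = no choice submitted yet
    collected: List[str] = field(default_factory=list)

    def submit_choice(self, accepted: bool) -> None:
        # The choice is stored only when the user actually acts (point 1).
        self.choice = accepted

    def track(self, event: str) -> None:
        # One guard covers both failure modes: acting before a choice
        # exists (point 1) and ignoring a refusal (point 4).
        if self.choice:                     # None and False both block
            self.collected.append(event)

gate = ConsentGate()
gate.track("page_view")                     # before any choice: dropped
gate.submit_choice(False)
gate.track("click")                         # choice was "no": dropped
print(gate.collected)                       # []
gate.submit_choice(True)
gate.track("search")                        # only now is collection allowed
print(gate.collected)                       # ['search']
```

The banners Schrems complains about implement the opposite: they fire the tracking calls first and consult the stored choice later, if ever.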
On the post: Guess What? Many Cookie Banners Ignore Your Wishes, So Max Schrems Goes On The GDPR Attack Again
Re:
You're wrong on so many points.
The OS doesn't manage the cookies; the browser does. When using Windows, there is a chance you use the OS-provided browser, but you can choose your browser independently of the OS. In some cases, you can add plug-ins or set options to block cookies altogether.
Also, it's your data being tracked, not that of the "owner" of the OS. Privacy laws don't care whose OS it is; it's the user's private data that is in question. You can sue anyone who keeps your data, regardless of who owns the computer, the OS or the browser.
You're welcome to try again once you've informed yourself on the subject.
On the post: Be Careful What You Wish For: TikTok Tries To Stop Bullying On Its Platforms... By Suppressing Those It Thought Might Get Bullied
Re: Re: Re: Re: Well that's one way to deal with bullies/trolls.
Two problems in your response.
It wouldn't be bullying just by itself. It's more of an "adding insult to injury" type of conduct. But it's definitely a wrong move, because you're telling the victims "we don't want you here". Even if that's not the intent of the service provider, they are participating in the bullying by effectively enforcing the very message the bullies were already sending.
You're also conflating moderating victims of bullying because of who they are (they didn't violate a T&C or law; they are only the targets of such violations) with normal moderation of unwanted content. In the former case, you remove people because others might direct "bad" behavior at them. In the latter, you target people publishing "bad" content. (Whatever the definition of "bad" is doesn't really matter.) Those are fundamentally different.
I only agree that there was likely no actual malice in this move. It's just a way to "remove bullying" in a seemingly efficient way. Except it's not, because you keep the actual bad elements in and send the wrong message to both sides, namely that bullies are free to target another group of people... whom you would then also have to ban.
On the post: Hungary Has Fined Facebook For 'Misleading Consumers' Because It Promoted Its Service As 'Free'
Re: Re: Re:
As a note, I don't disagree with the idea that FB made things unclear on purpose and is probably abusing the data it receives from users.
I just wholeheartedly disagree with Hungary going after FB over the word "free". That seems to me like a twisted means to a possibly justified end. They wanted to find FB guilty of something, so they latched on to this one word.
On the post: Hungary Has Fined Facebook For 'Misleading Consumers' Because It Promoted Its Service As 'Free'
Re: Re:
In my opinion, that is still not the right definition here.
The reason FB users provide data is not to "pay" Facebook; it's data they want to share... though they might not clearly understand how broadly this data will be shared.
FB takes the data you provide for your own reasons and uses it in ways that get FB paid by others. Banks mostly work the same way with your money.
As I see it, it's more of a "we have a common interest in sharing your data" than a "we provide you with a service to share your data in exchange for... your data".
You mention that governments can even tax bartering, but there is nothing to tax here between the customer and the service provider. There are taxable amounts, but those are between FB and its business customers. It's not "gratis" by your definition because there is a profit motive; it just happens not to come from the users.
On the post: Marvin Gaye Family Not Done With Pharrell Just Yet: Bring Him Back To Court Claiming Perjury
Re:
These lawyers are trying to conflate "songs that feel the same" and "songs that make you feel the same".
Neither of these interpretations is subject to copyright anyway... unless you manage to confuse jurors enough.
My trust in the justice system is pretty limited in some cases.
Complex technical or legal points are definitely not something you want random people to judge. Even if you give them a crash course on the law involved in the trial, they will never understand it all. And that doesn't even begin to address the technical subjects.
On the post: Marvin Gaye Family Not Done With Pharrell Just Yet: Bring Him Back To Court Claiming Perjury
Re:
That's exactly how I read it too.
But you can count on the greedy lawyers of a greedy estate to read it as something entirely different.
On the post: Robyn Openshaw, 'The Green Smoothie Girl,' Threatening SLAPP Suits Over Mediocre Reviews
(emphasis added)
Am I the only one thinking that you might want to be careful with this phrasing?
On the post: Study Says Russian Trolls Didn't Have Much Influence On Election; But It's More Complicated Than That
Re: Re: Re: Re:
Ah, I do agree that we should investigate and prevent the means that were used to influence the election. In that regard, this discussion is certainly still healthy.
Here's a quick summary of my position:
There are two discussions, the short-term one regarding Trump specifically and a long-term one regarding election security in general.
I do think the discussion is pointless at this point regarding the current impeachment proceedings and in terms of debating one candidate against another in the next election. The impeachment has far more provable points to rest on, and election debates should focus on two things: platform and credibility. Flinging mud at each other without proof is a bad tactic which, as I mentioned above, seems to help one side more than the other.
For a longer-term view on election security and foreign policy, this is definitely a conversation that needs to happen. I was wrong to give the impression that we should drop it because we can't prove that Trump actively chose to collude, whatever the legal definition of that would be.
As for the point of this study, I find that there is none. Russia definitely tried to meddle, and this very likely had some influence. But trying to weigh that influence is pointless by now. Understanding how it happened and preventing it from recurring is more important than playing a blame game. Since we didn't learn anything new about either what happened or how to stop it, I'll move on.
On the post: Another Federal Court Says Compelled Production Of Fingerprints To Unlock A Phone Doesn't Violate The Constitution
Security
But until it's all settled, odds are passwords are better than fingerprints if your main concern is unwanted access by government employees.
Not only the government.
If a criminal wants access to your fingerprint-locked phone, he can just knock you out and press a couple of your fingers on your phone. It's simpler and faster than forcing you to reveal your password. There is still a margin of error, as you have more fingers than the number of allowed attempts, but the chance of success is infinitely higher than guessing a password.
Moreover, you always have the problem that you can't change your fingerprints if they are compromised. Which can happen whenever you touch anything, anywhere.
I'm surprised fingerprints have ever been considered "security" at all. It's OK as an additional layer on top of a password, but it's definitely not great as the single layer of security. It's convenient when you don't really care about security, but it stops there.
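Rough numbers make that "margin of error" concrete. Under illustrative assumptions (2 enrolled fingers out of 10, 5 attempts before lockout, an 8-character lowercase password, and an attacker trying fingers at random):

```python
from math import comb

enrolled, fingers, attempts = 2, 10, 5

# Probability that at least one of the 5 fingers tried is enrolled:
# 1 minus the chance all 5 picks land on the 8 non-enrolled fingers.
p_fingerprint = 1 - comb(fingers - enrolled, attempts) / comb(fingers, attempts)

# Probability of guessing an 8-character lowercase password in 5 tries.
p_password = attempts / 26**8

print(f"fingerprint: {p_fingerprint:.2%}")  # ~78%
print(f"password:    {p_password:.2e}")     # ~2.4e-11
```

Even with these made-up parameters, the gap is about ten orders of magnitude; the unconscious-victim attack is close to a sure thing, while the same five attempts against a password are hopeless.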
On the post: Be Careful What You Wish For: TikTok Tries To Stop Bullying On Its Platforms... By Suppressing Those It Thought Might Get Bullied
Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red tho
It's a good enough analogy.
It can be improved with a reminder that those marbles are not just plain "blue" and "red", but shades of blue, red... and purple. Not to mention that some of them will appear more blue or red depending on the lighting.
There was an experiment described in an article here.
A handful of "posts" were given to a team of "moderators" who had to decide whether each should be moderated or not. I don't remember anything about a time limit, and there were only a few samples of content, so "scale" was not a factor.
It still ended with different results for each sample. It was impossible to get the whole team to reach unanimous decisions about approving or rejecting the content.
So, if I give you the task of rejecting all blue marbles and keeping all red marbles, what are you going to do with this single marble I hand you to sort, which happens to be purple?
Perfectly moderating at scale is indeed impossible because... of the scale.
But it's already impossible because of subjectivity.
It shouldn't stop sites from improving their moderation processes, but the public should stop asking for the impossible. Notify the site when an error is made, and hope they have the resources to investigate. On the other hand, if they don't have the resources, they should not promise that they will investigate. They should definitely be clear about their limits so that the public can set its expectations accordingly.
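The purple-marble problem can even be simulated: give the same rule to several moderators whose personal blue/red cutoffs differ slightly (all numbers invented for illustration):

```python
# Each marble has a "blueness" from 0.0 (pure red) to 1.0 (pure blue).
# The rule is "reject blue marbles" -- but every moderator draws the
# blue/red line at a slightly different place.
marbles = [0.05, 0.30, 0.48, 0.50, 0.52, 0.70, 0.95]   # 0.50 ~= purple
moderator_thresholds = [0.40, 0.45, 0.50, 0.55, 0.60]

for marble in marbles:
    votes = [marble > t for t in moderator_thresholds]
    verdict = "unanimous" if len(set(votes)) == 1 else "split"
    print(f"blueness {marble:.2f}: {votes.count(True)}/5 reject -> {verdict}")
```

Marbles near either end get unanimous verdicts; everything around 0.50 splits the team, no matter how carefully the rule itself is written. Scale multiplies the errors, but subjectivity guarantees them.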