Security Researcher Discovers Flaws In Yelp-For-MAGAs App, Developer Threatens To Report Him To The Deep State

from the shooting-the-messenger dept

Even a cursory look at past stories we've done about how companies treat security researchers who point out the trash-state of their products would reveal that entirely too many people and companies seem to think shooting the messenger is the best response. I have never understood the impulse to take people who are essentially stress-testing your software for free, ultimately pointing out how the product could be safer than it is, and then threaten those people with legal action or law enforcement. But, then, much of the world makes little sense to me.

Such as why a Yelp-for-MAGA-people app should ever be a thing. But it absolutely is a thing, with conservative news site 63red.com releasing a mobile app that is essentially a Yelp clone, but with the twist that its chief purpose is to let Trump supporters know how likely they are to be derided when visiting a restaurant. This is an understandable impulse, I suppose, given the nature of politics in 2019 America, though the need for an app seems like overkill. Regardless, the app was released and a security researcher found roughly all the security holes in it.

On Tuesday, a French infosec bod, going under the Mr Robot-themed pseudonym Elliot Alderson and handle fs0c131y, notified 63red that it had left hard-coded credentials in its Yelp-for-Trumpistas smartphone application, and that whoever built its backend APIs had forgotten to implement any meaningful form of authentication.

Alderson poked around inside the Android build of the app, and spotted a few insecure practices, including the username and password of the programmer, and a lack of authentication on its backend APIs, allowing anyone to pull up user account information, and potentially slurp the app's entire user database. It's also possible to insert data into the backend log files, we're told.

In other words, what 63red meant to build was an app to let Trump supporters know where they can go to feel safe. What it actually built was an app that tried to do that, but instead exposed user information to anyone who wanted to mine for it or, say, build a list of Trump supporters for reasons that could be entirely nefarious. Not great.
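
To make the reported flaw concrete, here is a minimal sketch of what "no meaningful authentication on the backend APIs" looks like from a researcher's side. Every hostname, path, and field name below is invented for illustration; this is not 63red's actual API, just the general pattern of a REST endpoint that hands back user records to anyone who asks, with guessable sequential IDs and no credentials required.

    # Hypothetical sketch only: the base URL, paths, and field names are
    # invented; they are not 63red's real backend.
    import requests

    BASE = "https://api.example-reviews.test"  # placeholder hostname

    def fetch_user(user_id: int):
        """Request a user record while sending no credentials at all.

        A properly secured API would answer 401/403 here; the flaw described
        above is that records came back for any ID, to any caller.
        """
        resp = requests.get(f"{BASE}/users/{user_id}", timeout=10)
        return resp.json() if resp.status_code == 200 else None

    if __name__ == "__main__":
        # Sequential IDs plus no authentication means anyone can walk the
        # entire user table, which is how a "minor problem" becomes a
        # ready-made list of the app's users.
        for uid in range(1, 6):
            record = fetch_user(uid)
            if record:
                print(uid, record.get("username"))

The same goes for the hard-coded developer credentials: anything baked into a shipped client (an APK or a JavaScript bundle) should be treated as public, which is why API keys and passwords belong on the server, behind per-user, server-validated tokens.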

Nor was the reaction from 63red, which decided that Alderson pointing out its shoddy work warranted a threat to refer him to the FBI, AKA the Deep State.

"We see this person’s illegal and failed attempts to access our database servers as a politically-motivated attacked, and will be reporting it to the FBI later today," 63red's statement reads. "We hope that, just as in the case of many other politically-motivated internet attacks, this perpetrator will be brought to justice, and we will pursue this matter, and all other attacks, failed or otherwise, to the utmost extent of the law."

63red described the privacy issues as a "minor problem," and noted that no user passwords were exposed, nor was any user data changed.

For his part, Alderson took the threat of an FBI referral in full stride. Far from quaking in his boots, he simply pointed out that 63red's security was so non-existent that he didn't need to commit any crimes to do what he did.

"The FBI threat is a threat, I didn’t do anything illegal," he told The Register. "I didn’t break or hack anything. Everything was open."

And now this whole story is getting far greater coverage due to the threat than it would have had 63red simply, you know, secured their app based on the freely given information provided by a white hat security researcher.

I'm sure the folks using this app couldn't feel more safe.



Filed Under: donald trump, maga, reviews, security, threats
Companies: 63red


Reader Comments



  • Thad (profile), 14 Mar 2019 @ 12:13pm

    I heard the app had been pulled from both Apple and Google's stores due to its security issues, though I can't find a good source at the moment.

  • Anonymous Coward, 14 Mar 2019 @ 12:14pm (flagged by the community)

    Isn't security researcher just a fancy name for cracker?!!

    • Anonymous Anonymous Coward (profile), 14 Mar 2019 @ 12:17pm

      Re:

      Wouldn't that distinction be based upon intent? I know the law doesn't make that distinction, but that is what is wrong with the law, not what is wrong with a legitimate researcher looking for the kinds of problems found in this app.

      • Anonymous Coward, 14 Mar 2019 @ 12:34pm

        Re: Re:

        Wouldn't an invitation, in the form of a challenge for a reward, be the only permission an app developer might want to grant to people tampering with his work?

        • Anonymous Coward, 14 Mar 2019 @ 12:41pm

          Re: Re: Re:

          When one pays for something it is theirs to tamper with.

        • Mason Wheeler (profile), 14 Mar 2019 @ 12:49pm

          Re: Re: Re:

          If John Q. Appdeveloper develops an app, and I download it and put it on my computer, suddenly John is no longer the only stakeholder with an interest in whether or not the app is secure. My right to know whether John's software is introducing security holes to my computer, and make informed decisions based on that knowledge, trumps John's interest in hiding the truth in order to not be embarrassed by news getting out of his shoddy software development skills.

          • Anonymous Coward, 15 Mar 2019 @ 9:54am

            Re: Re: Re: Re:

            No security holes were reported in the app. They're all on the server... which still matters, because it could be your information that leaks, but it's basically illegal for you to check whether a server's secure. (Of course, if the app contains what looks like a server login+password, you can make an educated guess as to its security.)

            My right to know whether John's software is introducing security holes to my computer

            This is almost echoing Stallman. We can hardly say this right exists when most apps ship without source code. This developer appears to have accidentally published it.

            • Madd the Sane (profile), 15 Mar 2019 @ 12:06pm

              Re: Re: Re: Re: Re:

              (Of course, if the app contains what looks like a server login+password, you can make an educated guess as to its security.)

              Uh…

              Alderson poked around inside the Android build of the app, and spotted a few insecure practices, including the username and password of the programmer[…]

              Also:

              We can hardly say this right exists when most apps ship without source code. This developer appears to have accidentally published it.

              There are other ways of checking the security of an app, such as what APIs it calls.

        • Thad (profile), 14 Mar 2019 @ 1:10pm

          Re: Re: Re:

          What tampering? It's a JavaScript app. Its source code is included. "Alderson" looked at the source code and saw it had sensitive information in cleartext.

          In what way is that tampering? If I hit Ctrl-U right now, am I "tampering" with Techdirt?

          • Anonymous Coward, 14 Mar 2019 @ 2:10pm

            Re: Re: Re: Re:

            Not all that long ago someone, I forget who, suggested that altering the URL in the browser "address bar" constituted hacking.

            I imagine there are many more.

            • Anonymous Coward, 14 Mar 2019 @ 2:47pm

              Re: Re: Re: Re: Re:

              It was a politician if I remember right.

              • Qwertygiy, 14 Mar 2019 @ 3:48pm

                Unless my memory is grossly mistaken, it was the Nova Scotia government trying to accuse a teenager of "hacking" documents that were improperly published to their online database, by changing the ID number in the URL.

    • Aaron Walkhouse (profile), 14 Mar 2019 @ 12:31pm

      Depends:

      • Anonymous Coward, 14 Mar 2019 @ 12:38pm

        Re: Depends:

        They don't want to give credence to us modern-day Crackers, I guess!

      • Bruce C., 14 Mar 2019 @ 3:14pm

        Re: Depends:

        Those are the ones using the app, not the ones breaking it.

    • Anonymous Coward, 14 Mar 2019 @ 12:43pm

      Re:

      "Isn't security researcher just a fancy name for cracker?!!"

      No - not necessarily. One does not need to be proficient with a thing in order to ascertain that thing is grossly misconfigured.

      • Anonymous Coward, 14 Mar 2019 @ 12:56pm

        Re: Re:

        Ok. That is reasonable to understand; for the purpose of one's own security, one would want to take a look. But reverse engineering an app on one's own could be violating its terms of use. Intent would have to be determined in a court of law, would it not?

        • Anonymous Coward, 14 Mar 2019 @ 1:22pm

          Re: Re: Re:

          Which is not what happened here. Nothing was reverse engineered; he looked at what was publicly available for anyone to view and accessed the app with APIs that the developer had left open for anyone to use.

          It's akin to putting all that information on their website and making each user account its own unique URL that you can access by typing in different letters in the URL.

        • Anonymous Coward, 14 Mar 2019 @ 1:28pm

          Re: Re: Re:

          Violations of use are not illegal. All they can do is ban you from using it.

        • Baron von Robber, 14 Mar 2019 @ 1:29pm

          Re: Re: Re:

          Uh, he used APIs.

        • Baron von Robber, 14 Mar 2019 @ 1:30pm

          Re: Re: Re:

          Oh and there was no 'reverse engineering'. They got open source code.

          • Thad (profile), 14 Mar 2019 @ 2:36pm

            Re: Re: Re: Re:

            Was it open source, or was it just bundled with source code? There's a difference; "open source" has a specific definition that goes beyond just source code availability.

            • Qwertygiy, 14 Mar 2019 @ 3:45pm

              Re: Re: Re: Re: Re:

              You are indeed correct, although the difference between "source-available software" (where the source code is visible) and "open source software" (where the source code can be distributed and modified) wouldn't make a difference in this case, as he wasn't modifying or copying it.

              • Thad (profile), 14 Mar 2019 @ 3:57pm

                Re: Re: Re: Re: Re: Re:

                Right, I just think the distinction is an important one for people to know and understand, even if it's not directly pertinent to this story.

        • Anonymous Coward, 14 Mar 2019 @ 2:14pm

          Re: Re: Re:

          "But reverse engineering an app on one's own could be violating its use."

          It depends. Is it a contractual issue? Is it open source? Are EULAs enforceable?

          When one purchases an item, it is theirs to do with as they so please unless they are aware of and agree to the terms which restrict same.

          ianal - jic

          • Anonymous Coward, 14 Mar 2019 @ 3:56pm

            And even then, that is a civil matter, not criminal, unless there is unauthorized duplication and copyright laws get involved. As far as I have ever been able to tell, you can't make it illegal for someone to find ways to view your code; you can only make it illegal to copy that code.

            • Anonymous Coward, 14 Mar 2019 @ 4:23pm

              Re:

              AFAIK, non-commercial copyright infringement is a civil matter.
              Also, simply making a copy is not "engineering" anything.

    • Anonymous Coward, 14 Mar 2019 @ 12:51pm

      Re:

      Not really.

      "Cracker" refers to anyone who attempts to break into software applications for any purpose, malicious or not.

      Security researchers may perform cracking functions or not depending on the type of security research they are doing. Sometimes it's hardware related, sometimes software, and sometimes they don't need to crack anything since some idiots were dumb enough to just leave everything open. As in this case.

      • Anonymous Coward, 14 Mar 2019 @ 1:51pm

        Re: Re:

        Your "crackers" definition makes no sense.

        My personal definition of "cracker" is someone who attempts to defeat security protections (usually DRM) in a piece of software.

        If they're just modifying the software to perform a novel task, they're a hacker, not a cracker.

        As this software had no security protections in place (not even obscurity), it doesn't count as cracking. Since he didn't modify the software to do something novel, it's not hacking.

        All he did was read the code and query the API and do a security analysis of the results. So yeah, this was security research.

        • Anonymous Coward, 14 Mar 2019 @ 2:38pm

          Re: Re: Re:

          Your "crackers" definition makes no sense.

          Maybe we're saying different words but meaning the same thing? From what I read in your comment, it's no different than what I'm saying, just different words.

          My personal definition of "cracker" is someone who attempts to defeat security protections (usually DRM) in a piece of software.

          I guess I don't see the difference between this and what I said:

          anyone who attempts to break into software applications for any purpose, malicious or not.

          By definition, breaking into software applications would require defeating security protections. I guess I just don't see the difference here other than different words used to say the same thing.

          If they're just modifying the software to perform a novel task, they're a hacker, not a cracker.

          I never said anything about modifying the software, just breaking into it, i.e. defeating security/DRM. That said, that's not really what a hacker is either. From Wikipedia:

          A computer hacker is any skilled computer expert that uses their technical knowledge to overcome a problem.

          More recently it's used in pop culture to refer to people who break into secure systems for nefarious purposes. But in reality it just means somebody really skilled with computers, typically computer programming.

          As this software had no security protections in place (not even obscurity), it doesn't count as cracking.

          Nor did I say it did. That was kind of the point of my entire comment in responding to the OP.

          All he did was read the code and query the API and do a security analysis of the results. So yeah, this was security research.

          Which is also what I said. Note this part of my comment with emphasis added:

          Security researchers MAY perform cracking functions OR NOT depending on the type of security research they are doing.

          • Mason Wheeler (profile), 14 Mar 2019 @ 2:51pm

            Re: Re: Re: Re:

            More recently it's used in pop culture to refer to people who break into secure systems for nefarious purposes.

            That's not recent at all. It's been the accepted meaning of the term in common parlance for at least 30 years now. The definition you got from Wikipedia is incredibly dated and no one uses it in that sense anymore outside of Unix culture, which has always had trouble accepting that we're no longer living in the 1970s.

            • Anonymous Coward, 14 Mar 2019 @ 2:55pm

              Re: Re: Re: Re: Re:

              That's not recent at all.

              Damnit I'm getting old.

            • Anonymous Coward, 14 Mar 2019 @ 4:30pm

              Re: Re: Re: Re: Re:

              "outside of Unix culture, which has always had trouble accepting that we're no longer living in the 1970s."

              Unix is still in use and remains useful for many purposes. Its share of mainframes may be declining due to Linux replacing those installations where it is more cost effective ... but in no way is Unix going away. What would those old bankers use to run their cobol? I suppose they could recompile on Linux but .....

              • Anonymous Coward, 14 Mar 2019 @ 4:51pm

                Re: Re: Re: Re: Re: Re:

                What would those old bankers use to run their cobol?

                OS/360 or one of its successors. Recompilation is difficult when most of the holes are scrambled by a mouse nest in the card pack, never mind updating the application.

              • Mason Wheeler (profile), 15 Mar 2019 @ 7:11am

                Re: Re: Re: Re: Re: Re:

                I didn't say Unix the specific product, I said Unix culture, which definitely includes Linux and other derivatives.

            • Anonymous Coward, 14 Mar 2019 @ 7:57pm

              Re: Re: Re: Re: Re:

              Not at all recent, and still incredibly stupid.

        • Anonymous Coward, 14 Mar 2019 @ 8:44pm

          Re: Re: Re:

          Let's try some definitions

          hacker - one who hacks

          cracker - one who criminally hacks

          gacker/ghacker - one who governmentally hacks

          macker - one who militarily hacks

          jihacker - one who jihadi hacks

          cacker - one who commercially hacks

          quacker - one who quasi hacks

          lacker - one who lazily hacks

          etc., etc., etc.

        • PaulT (profile), 15 Mar 2019 @ 1:12am

          Re: Re: Re:

          "My personal definition"

          If you have to make up the meaning of a word in order to believe you're right about something, you may not actually be in the right.

    • Baron von Robber, 14 Mar 2019 @ 12:54pm

      Re:

      No.

    • Anonymous Coward, 14 Mar 2019 @ 12:58pm

      Re:

      "Isn't security researcher just a fancy name for cracker?!!"

      I thought Cracker was a term for a Trump voter.

      • Anonymous Coward, 14 Mar 2019 @ 1:04pm

        Re: Re:

        It's just a politic thing to want to steal words and apply their own twisted meanings to them!

        • Anonymous Coward, 14 Mar 2019 @ 1:39pm

          Re: It’s cromulant to me damnit

          Like using the words politic and steal when political and cultural appropriation would better serve your purpose?

          • Anonymous Coward, 14 Mar 2019 @ 2:53pm

            Re: Re: It’s cromulant to me damnit

            I have a purpose? Dam nit is right!

    • Anonymous Coward, 14 Mar 2019 @ 10:29pm

      Re:

      No. Next question.

    • Anonymous Coward, 15 Mar 2019 @ 8:55am

      Re:

      It really depends on how the information is used.
      Warning the owner of the vulnerability - Security Researcher.
      Posting the vulnerability on 4chan - Cracker.

      Definitely biased, but the outcome was already clear to me as soon as I saw MAGA in the title. Of course they would shoot the messenger. They don't have an issue with attacking journalists.

  • bob, 14 Mar 2019 @ 2:04pm

    who's a snowflake now?

    It seems like people would be more likely to use the app to avoid places where racists hang out than to feel safe wearing a symbol of racism.

    • Anonymous Coward, 14 Mar 2019 @ 2:22pm

      Re: who's a snowflake now?

      No need for the app - just have a look at how many pickup trucks are in the parking lot and how many have those silly fake balls hanging off the rear end.

      • Anonymous Coward, 14 Mar 2019 @ 2:59pm

        Re: Re: who's a snowflake now?

        Oh yeah, the trucks with balls parked in a parking lot ... like zero? More like hitches for bitches with lots of horsepower.

  • Anonymous Coward, 14 Mar 2019 @ 3:20pm

    It's a TRAPP!!!!

    • Anonymous Coward, 14 Mar 2019 @ 4:25pm

      Re: It's a TRAPP!!!!

      That was actually my first thought upon reading about the complete lack of security. If it was on purpose, it was to get right-leaning people to sign up and self-identify.

  • Rekrul, 14 Mar 2019 @ 5:52pm

    When researchers find security flaws, they should go straight to the press, rather than trying to alert the companies who are likely to threaten them.

    Also, what's stopping anti-Trump people from using the app to find the MAGA crowd's safe spaces? Do you need to have your MAGA-ness tested before you're allowed to use it?

    • Rocky, 14 Mar 2019 @ 6:27pm

      Re:

      When researchers find security flaws, they should go straight to the press, rather than trying to alert the companies who are likely to threaten them.

      Uhm, no. We have the responsible disclosure model which most researchers adhere to, because the alternative you suggest most likely means harm to the public in some way.

      Now, if a company feels like they need to jerk the researchers around, we have plenty of examples of what happens to those companies - the phrase "Streisand effect" comes to mind, among other things.

      • Anonymous Coward, 14 Mar 2019 @ 9:04pm

        Re: Re:

        Responsible disclosure should be the following:

        Private disclosure should apply only to those companies/organisations/developers who have stated upfront that any security bugs found will be accepted and dealt with appropriately with no repercussions to the reporting person or group.

        If companies/organisations/developers do not have a public statement to that effect, then it is appropriate to report directly to the press or other groups for public dissemination of the security flaws.

        If the public is adversely affected then we should expect that the public will be "encouraged" to take their own security more seriously.

        • Rocky, 15 Mar 2019 @ 5:20am

          Re: Re: Re:

          Private disclosure should apply only to those companies/organisations/developers who have stated upfront that any security bugs found will be accepted and dealt with appropriately with no repercussions to the reporting person or group.

          So, if a company hasn't stated upfront how to handle bugs it's okay to publicly report the bugs even though the bug may be due to third-party code from someone that has a stated procedure for handling bugs? It may not be self-evident from the flaw who may be responsible for the code.

          If the public is adversely affected then we should expect that the public will be "encouraged" to take their own security more seriously.

          Sometimes the public can't do anything with the affected piece of software/hardware to mitigate the problem. Sometimes the bug can only be fixed by a third party in the short term, just look at Spectre/Meltdown - you as a user of a computer couldn't do anything to be 100% safe except unplugging your computer from the internet which rendered it pretty useless for most tasks until the maintainers of your OS of choice came with a mitigation patch. In this instance though, Intel and AMD had a procedure for handling bugs.

          It's all in the name, Responsible disclosure, which means that the entity doing the disclosure should do it in a responsible fashion so as not to unnecessarily expose the public to risks, even though the company responsible for fixing it may be asshats.

          • Anonymous Coward, 21 Mar 2019 @ 3:37pm

            Re: Re: Re: Re:

            If they are going to be supplying services of any kind on the internet, then they should be competent enough or have people who are competent enough to do this before they go online. If they are not, then they have not done their due diligence and they need to accept the consequences thereof. It is a case of "stop killing the messenger" for your own actions. If I create an online system and do not have provision to handle found problems then it is on my head not on the head of someone else who reports the problem. As was pointed out elsewhere, when you accept blame for your own actions, others have a greater difficulty controlling your response. If you blame others for your actions, they have control over you.

        • That Anonymous Coward (profile), 15 Mar 2019 @ 5:46am

          Re: Re: Re:

          Ask that guy who found all those dental records on an open server how well that fscking works out...

      • Rekrul, 19 Mar 2019 @ 10:48am

        Re: Re:

        Uhm, no. We have the responsible disclosure model which most researchers adhere to, because the alternative you suggest most likely means harm to the public in some way.

        Now, if a company feels like they need to jerk the researchers around, we have plenty of examples of what happens to those companies - the phrase "Streisand effect" comes to mind, among other things.

        Far too often these days, the normal sequence of events is;

        Person finds security flaw.
        Person reports security flaw to company.
        Company ignores problem and threatens person with criminal charges.
        Story gets media attention.
        Company is forced to deal with problem.

        • PaulT (profile), 20 Mar 2019 @ 1:30am

          Re: Re: Re:

          You're being quite charitable there, since you missed out the part where the flaw is usually exploited before they bother to patch it anyway.

    • Toom1275 (profile), 14 Mar 2019 @ 11:10pm

      Re:

      He told someone who represents a group known for attacking people who tell them things they don't like to hear something they wouldn't like to hear.

      Doing it in public first is likely for protection, as now the public can see the full exchange from the start, and show clearly that any retaliation that may come against him (such as the standard alt-right harassment campaign, or a Jhonsmith-level fraudulent report to law enforcement) isn't based on him having done any wrong.

  • That Anonymous Coward (profile), 14 Mar 2019 @ 9:29pm

    For all the handwringing about how sacred our data is supposed to be, why isn't there a simple law that punishes stupid?

    Instead we lap up the breathless 'ZOMG SUPER HACKERS!!!!!!! SEND THE FEDS!!!!!' (hysterical given the source this time), believe them when they claim nothing bad can happen, & make another security researcher the target of a smear campaign.

    This was so poorly coded, but we have laws allowing the government to black site the person who tried to warn the creator of their fsck ups. Then we 'believe' the story that nothing bad happened (they couldn't figure out how to not hardcode the dev's login & password into it, yet we trust their review of whether anything happened?) & move on.

    Stop blaming the messengers & start penalties for shit coding.
    Much like DMCA notices need it, when it costs them money to screw up they magically get better at doing it.

  • nasch (profile), 15 Mar 2019 @ 10:12am

    Illegal

    "The FBI threat is a threat, I didn’t do anything illegal," he told The Register. "I didn’t break or hack anything. Everything was open."

    Well, this guy is an optimist, considering someone was going to be prosecuted (he pled out) for updating a URL:

    http://www.coreyvarma.com/2015/01/can-modifying-a-websites-url-land-you-in-prison-under-the-cfaa/

    I have never understood the impulse to take people who are essentially stress-testing your software for free

    This was security analysis, not stress testing.

    https://en.wikipedia.org/wiki/Stress_testing


