Harrisburg University Researchers Claim Their 'Unbiased' Facial Recognition Software Can Identify Potential Criminals

from the fresh-hells-delivered-daily dept

Given all we know about facial recognition tech, it is literally jaw-dropping that anyone could make this claim… especially without the underlying work being independently vetted.

A group of Harrisburg University professors and a PhD student have developed an automated computer facial recognition software capable of predicting whether someone is likely to be a criminal.

The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias. The prediction is calculated solely based on a picture of their face.

There's a whole lot of "what even the fuck" in CBS 21's reprint of a press release, but let's start with the claim about "no racial bias." That's a lot to swallow when the underlying research hasn't been released yet. Let's see what the National Institute of Standards and Technology has to say on the subject. This is the result of the NIST's examination of 189 facial recognition AI programs -- all far more established than whatever it is Harrisburg researchers have cooked up.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Why is this acceptable? The report inadvertently supplies the answer:

Middle-aged white men generally benefited from the highest accuracy rates.

Yep. And guess who's making laws or running police departments or marketing AI to cops or telling people on Twitter not to break the law or etc. etc. etc.

To craft a terrible pun, the researchers' claim of "no racial bias" is absurd on its face. Per se stupid af, to use legal terminology.

Moving on from that, there's the 80% accuracy, which is apparently good enough since it will only threaten the life and liberty of 20% of the people it's inflicted on. I guess if it's the FBI's gold standard, it's good enough for everyone.
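
To put some numbers on why "80% accuracy" is even scarier than it sounds, here's a minimal back-of-the-envelope sketch. The assumptions are entirely ours, since the press release specifies nothing: we treat "80% accuracy" as both the true-positive and true-negative rate, and pretend a full 5% of the scanned population are actual criminals.

    # Back-of-the-envelope math for an "80% accurate" criminal-spotting AI.
    # Assumptions (ours, not Harrisburg's): 80% is both the true-positive and
    # true-negative rate, and 5% of scanned faces belong to actual criminals.

    def flag_counts(population, base_rate, accuracy):
        criminals = population * base_rate
        innocents = population - criminals
        true_positives = criminals * accuracy          # criminals correctly flagged
        false_positives = innocents * (1 - accuracy)   # innocent people wrongly flagged
        flagged = true_positives + false_positives
        return flagged, false_positives / flagged      # share of flags that are innocent

    flagged, innocent_share = flag_counts(population=1_000_000, base_rate=0.05, accuracy=0.80)
    print(f"{flagged:,.0f} people flagged; {innocent_share:.0%} of them innocent")
    # Prints: 230,000 people flagged; 83% of them innocent

Under those (charitable, made-up) assumptions, more than four out of five people the software flags are innocent. That's the false positive paradox, and it's why "only" 20% error becomes a disaster the moment you point a tool like this at whole populations.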

Maybe this is just bad reporting. Maybe something got copy-pasted wrong from the spammed press release. Let's go to the source… one that somehow still doesn't include a link to any underlying research documents.

What does any of this mean? Are we ready to embrace a bit of pre-crime eugenics? Or is this just the most hamfisted phrasing Harrisburg researchers could come up with?

A group of Harrisburg University professors and a Ph.D. student have developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal.

The most charitable interpretation of this statement is that the wrong-20%-of-the-time AI is going to be applied to the super-sketchy "predictive policing" field. Predictive policing -- a theory that says it's ok to treat people like criminals if they live and work in an area where criminals live -- is its own biased mess, relying on garbage data generated by biased policing to turn racist policing into an AI-blessed "work smarter not harder" LEO equivalent.

The question about "likely" is answered in the next paragraph, somewhat assuring readers the AI won't be applied to ultrasound images.

With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.

There's a big difference between "going to be" and "is," and researchers using actual science should know better than to use both phrases to describe their AI efforts. One means scanning someone's face to determine whether they might eventually engage in criminal acts. The other means matching faces to images of known criminals. They are far from interchangeable terms.

If you think the above quotes are, at best, disjointed, brace yourself for this jargon-fest which clarifies nothing and suggests the AI itself wrote the pullquote:

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

"Minute features in an image that are highly predictive of criminality." And what, pray tell, are those "minute features?" Skin tone? "I AM A CRIMINAL IN THE MAKING" forehead tattoos? Bullshit on top of bullshit? Come on. This is word salad, but a salad pretending to be a law enforcement tool with actual utility. Nothing about this suggests Harrisburg has come up with anything better than the shitty "tools" already being inflicted on us by law enforcement's early adopters.

I wish we could dig deeper into this, but we'll all have to wait until this excitable group of clueless researchers decides to publish its findings. According to this site, the research is being sealed inside a "research book," which means it will take a lot of money to actually prove this isn't any better than anything that's been offered before. This could be the next Clearview, but we won't know until the research is published. If we're lucky, that will happen before Harrisburg patents this awful product and starts selling it to all and sundry. Don't hold your breath.


Filed Under: ai, artificial intelligence, bias, facial recognition, precrime
Companies: harrisburg university


Reader Comments



  1. Anonymous Anonymous Coward (profile), 6 May 2020 @ 12:49pm

    We lie, and here is how you know that.

    “We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,”

    Do we now? Please state your peer-reviewed citations.

  2. That One Guy (profile), 6 May 2020 @ 1:17pm

    Let's test that shall we?

    For some reason I have the sudden urge to take their miracle technology and feed photos of all the researchers involved, every politician, and every cop they can get their hands on through it.

    After all, if it's capable of pre-crime then it should be interesting to see who among that sampling is a criminal just waiting for their chance. And with an 80% accuracy rate, well, that's a lot of potential criminals to sort through and find; criminals who will have no excuse if they are flagged by such an amazingly accurate piece of technology, since it couldn't possibly be wrong.

    Partial sarcasm aside, I can but hope that this is a junk science PR stunt, with the hope that after a while people will forget the "junk" half of that description and only remember those involved for making something really cool. Because if they actually think they've created a pre-crime facial recognition program, then either they are running a scam that is likely to be all too successful, given how eager cops are certain to be for yet another "violation of rights justification device", or they are so delusional that they have bought their own hype.

  3. Anonymous Anonymous Coward (profile), 6 May 2020 @ 1:44pm

    Re: Let's test that shall we?

    You might want to reconsider politicians and cops in your test sample, as they have a tendency to commit crimes (or at least display criminal-like behavior) at a greater rate than the general population.

  4. Stephen T. Stone (profile), 6 May 2020 @ 1:49pm

    You know all that talk about how we should view 1984 as a warning instead of an instruction manual? Minority Report now deserves that same distinction.

  5. Anonymous Coward, 6 May 2020 @ 1:56pm

    Re:

    Just lump in the entire cyberpunk genre. Dystopian corporate states should never be an end goal.

  6. Anonymous Coward, 6 May 2020 @ 1:59pm

    Define "criminal". After all, if we commit on average 3 felonies a day, then every person is a criminal and their AI fails to recognize that 20% of the pictures actually have people in them.

  7. Anonymous Coward, 6 May 2020 @ 2:08pm

    I could do better...

    80% accuracy is weak numbers. I have almost no programming training or experience (QBasic and Turbo Pascal in high school), and I'm pretty sure I could write a program that could identify whether or not someone is a criminal with 99% or better accuracy. All it would need to do is be fed a picture, and say "Yes".

    With the complexity of US and state laws, I'd say it's a pretty sure bet that almost every person living in the US has committed, or will commit a crime at least once in their lives.

  8. Anonymous Coward, 6 May 2020 @ 2:16pm

    Re: Let's test that shall we?

    Partial sarcasm aside, I can but hope that this is a junk science PR stunt,

    Or someone fishing for a big grant.

  9. ECA (profile), 6 May 2020 @ 2:23pm

    Re: We lie, and here is how you know that.

    “We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,”

    I do like the comment, and have some real ugly types that would NEVER be considered NOT a Criminal.
    Facial recog from computers is about 20% at the MAX, mostly because of Lighting, and angles and Other things that can change how a person looks.
    Emotion detection? It is only valid when you KNOW THAT PERSON. I know a lot of people who are MONOTONE, and you can't tell Squat about a joke until they tell you it was a JOKE. And Facial emotions?? You have GOT to be kidding.

    A friend and I thought about what to wear for Halloween: the Ugly thing that you don't want to get near, or that Nice looking SPIT-eating grin of a person in a suit with candy. WHICH would be more scary?

  10. Kitsune106, 6 May 2020 @ 2:25pm

    If no bias

    Then running the photos through a color swapper, to make sure there's no color or ethnic variance, should be okay. Or swap skin color... I mean, if unbiased, then a simple swap should not cause issues, right?

  11. Anonymous Coward, 6 May 2020 @ 2:32pm

    Actually, the expressions created by your verbal aggression are a good example of a red flag for future criminal behavior.

  12. OGquaker, 6 May 2020 @ 2:42pm

    Re: Tested that.

    I would turn myself in right now, but I'm sequestered at home:(

    I had some FTA warrants a few years ago, brought my toothbrush to 77th Street Police Station late on a Friday, 'cause jail is cheaper than "bail" (revenue enhancement), but the Cop at the computer lied and sent me back to the white suburbs. I had to wait the seven years to drive again :(

    Another time i got picked up with an "expired license", the judge put me on probation! I kicked my court-appointed attorney and blurted out "Civil", the judge paused and said "civil" and dropped my probation. High white cheeks are a blessing.

  13. Anonymous Coward, 6 May 2020 @ 2:50pm

    how old?

    How old were the newborns and what hospitals were involved?

  14. That Anonymous Coward (profile), 6 May 2020 @ 2:53pm

    Y'all missed something that was under the fold...

    A PhD candidate and an NYPD veteran.

    https://twitter.com/dancow/status/1257824523585536000

    I of course wanted to see the results when we gave the system the pictures of the cops who anally violated a detainee, the serial killer CBP agents, oh and those TSA guys who ran drugs & weapons.

  15. Agammamon, 6 May 2020 @ 3:07pm

    Pseudoscience is never refuted, it just mutates in search of the next grant.

  16. Whoever, 6 May 2020 @ 3:09pm

    Minority Report

    Minority Report wasn't supposed to be an instruction manual. If anything, it showed the dangers of attempting to predict crime.

  17. Agammamon, 6 May 2020 @ 3:14pm

    The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias. The prediction is calculated solely based on a picture of their face.

    Here's the deal - what does 'being a criminal' even mean?

    What about an adulterer? That's illegal in some places. Would a picture of a person who cheated on their spouse in a place where that's not illegal get a pass, while the software detects the "criminality" of someone who did it in a jurisdiction where it's illegal?

    Or two pictures of the same person: an earlier picture of the cheater from a jurisdiction where it's not illegal, and a later picture from the jurisdiction where it is?

  18. Anonymous Coward, 6 May 2020 @ 3:41pm

    ""Minute features in an image that are highly predictive of criminality." And what, pray tell, are those "minute features?"

    Microexpressions of aggression, contempt, intransigence, hostility, SNARK, etc. Sounds like very viable software, actually. Takes "nothing to hide, nothing to fear" to a whole new level.

  19. Anonymous Coward, 6 May 2020 @ 3:47pm

    Let's grant the conceit of this piece of bad fiction and assume as given that this software can predict who is going to be a criminal.

    This is going to help police prevent crime in what way, exactly?

  20. Anonymous Coward, 6 May 2020 @ 3:53pm

    The New Phrenology

    Back in the 1800s they claimed that you could tell if a person was a criminal by how they looked. Until recently, it was considered nonsense. (But there is money involved, so now we have phrenology 2.0.)

    I can make a system that is 100% correct. Since everyone is guilty of something, you mark everyone as a criminal. Done. Now pay me.

  21. Bobvious, 6 May 2020 @ 4:02pm

    Speaking of Clearview

    According to some embargoed pre-print that I just happen to have access to, it seems the new Harrisburg system will be called ClearBreach.

  22. Anonymous Coward, 6 May 2020 @ 4:05pm

    We all break at least 3 laws a day, on average correct?

    So going with the statements that have been made (that there are so many laws, good and bad, on the books) saying that everyone breaks at least 3 laws a day...

    So by extrapolation, EVERYONE will be a criminal at some point in their life (by breaking some law they may not even be aware of; pulled a tag off a mattress... busted! Just kidding, but you know what I mean).

    My new AI can predict with 150% accuracy whether or not someone will be a criminal at some point in some country in their life... spoiler: everyone will be a criminal at some point...

    Ok, cops pay me all the money now, all your bases are belonging to us...

  23. Bruce E (profile), 6 May 2020 @ 4:40pm

    Mugshots

    What are the chances that the training set for criminals is based on mugshots?

  24. Anonymous Coward, 6 May 2020 @ 5:23pm

    Re: The New Phrenology

    The sad and crazy thing is they could have gotten way better numbers if they had just made it match against a mugshot or active-warrants database.

    Even with the known limitations and issues of facial recognition making that dubious (large data sets of examples exceeding the tech's resolution, for one), it would be a way better idea on so many levels. But it seems bias laundering is the main market for machine learning in law enforcement.

  25. Anonymous Coward, 6 May 2020 @ 5:31pm

    Re: We all break at least 3 laws a day, on average correct?

    everyone breaks at least 3 laws a day...

    The quote was 3 felonies per day. If you count every instance of law-breaking in a day, including multiple violations of the same law, you'll get a much bigger number. That would include misdemeanors, traffic law violations, etc.

  26. Eldakka (profile), 6 May 2020 @ 5:46pm

    The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias.

    We don't need any sort of predictive facial or other sort of system to come up with 80% accuracy.

    Based on the incredible number of laws, ranging from local to state to federal to international, it's nearly impossible not to break some sort of law on a daily basis. I probably break several traffic laws on my daily commute to work. Daily internet activity probably breaks some law somewhere - most likely copyright.

    Therefore, picking any random sampling of any group of people, the chances are that 80% of them have committed some sort of crime today - ranging from speeding, turning without indicating, reading an article online, listening to (and downloading!) an .mp3, to, hell, exceeding authorised access to a computer system under some readings of that law - let alone in their entire lifetimes.

  27. Pixelation, 6 May 2020 @ 5:47pm

    "... they can extract minute features in an image that are highly predictive of criminality"

    It's the beady little eyes. Windows to the criminal soul.

    They should try this software on Senators and see what happens.

  28. That One Guy (profile), 6 May 2020 @ 8:24pm

    'First you said it was terrible, now it's great, which is it?'

    That may be true. However, the point of running them through the system is that it becomes somewhat more difficult for them to support a system for "accurately spotting criminals" if people can dig up quotes, from not too long before, of them objecting that it flagged them as criminals and insisting that simply must be a mistake.

  29. AnonyOpS, 6 May 2020 @ 10:45pm

    Pseudocrimes, ememecrimes and metacrimes are going through the roof this year.

    Karen #C19 #EpsteinDidNotKillHimself #LiuBingDidNotKillHimself #ChinaDidNothingWrong #MurderHornets #NYCOrganHarvest

  30. PaulT (profile), 7 May 2020 @ 1:44am

    Re:

    I do love it when people use hashtags in an environment where hashtags are useless.

  31. Stephen T. Stone (profile), 7 May 2020 @ 2:31am

    In fairness, you can use hashtags properly in Markdown. But you have to escape the octothorpe with a backslash (\) if you place one at the beginning of a line.

    #JustMarkdownThings

  32. Rocky, 7 May 2020 @ 3:03am

    Re:

    Are you assuming that these types of folks do due diligence to get things right?

    Shouldn't you know better at this point? :)

  33. Zane (profile), 7 May 2020 @ 3:22am

    Press release removed

    Looks like they've removed the press release: https://harrisburgu.edu/hu-facial-recognition-software-identifies-potential-criminals/
    From experience, people who write press releases sometimes exaggerate to make the story more interesting, and they don't always wait to get the researchers' approval before publicising. It's terrifying how much Comms departments and journalists will stretch the facts. I've had my own work inaccurately described in the past. The issue could be more about bad journalism than bad research. But I'm speculating, based on the removal of the press release and on the outlandish claims. It will be interesting to see the research once published.

  34. Scary Devil Monastery (profile), 7 May 2020 @ 4:53am

    Re:

    "Minority Report now deserves that same distinction."

    Worse still. As an AC stated below, this is essentially phrenology - the long-disproven theory that you could predict a person's personality and moral fiber from the topology of their skull. To my knowledge, the last ones to even try to apply that as a "method" were a bunch of Third Reich quacks under Mengele who wanted a way to find out whether someone who looked properly Teutonic might actually have Jewish or Romani blood... or, worse by far, be a homosexual.

    That a few scientists are desperate for grants should not excuse them for peddling pseudoscientific garbage whose only defenders in modern times were Hitler's Quack Squad.

  35. Scary Devil Monastery (profile), 7 May 2020 @ 4:59am

    Doesn't sound too hard...

    ...hell, I can identify a criminal just by reading a few facts about them.

    For instance, by using my own methods of deduction I can state with at least 80% accuracy that the Harrisburg researchers mentioned in the OP are fraudulent con men bucking for easy money by selling snake oil and miracle cures. And are probably Libras.

    Amazing what you can tell just by a casual glance, if you know how. You guys think I should patent the method?

  36. Stephen T. Stone (profile), 7 May 2020 @ 6:32am

    The news release outlining research titled “A Deep Neural Network Model to Predict Criminality Using Image Processing” was removed from the website at the request of the faculty involved in the research. The faculty are updating the paper to address concerns raised.

    Translation:

    We fucked up, and we know we fucked up, but we can’t say “we fucked up”. We also can’t say “oops, we did phrenology”. So accept this long-winded bullshit instead.

  37. crade (profile), 7 May 2020 @ 6:36am

    I could make that software..

    Picture looks like a human? Return yes..

  38. Bobvious, 7 May 2020 @ 7:04am

    Re: Re:

    It's good to know they included all the #necessaryhashtags

  39. Anonymous Coward, 7 May 2020 @ 7:06am

    We are all criminals, therefore all the artificial intelligence needs to do is respond to every inquiry with "Guilty! Lock 'em up!" That would be 100% accurate! They must have screwed something up.

    Seriously, though - I doubt it can tell the difference between a human face and a dog's.

  40. Anonymous Coward, 7 May 2020 @ 7:19am

    Re: Let's test that shall we?

    After all, if it's capable of pre-crime then it should be interesting to see who among that sampling is a criminal just waiting for their chance. And with an 80% accuracy rate, well, that's a lot of potential criminals to sort through and find; criminals who will have no excuse if they are flagged by such an amazingly accurate piece of technology, since it couldn't possibly be wrong.

    Be careful what you wish for. There's nothing authoritarians like more than pushing a button and claiming "mission accomplished."

    So what if it flags innocent people as criminals? As long as it avoids flagging the right people, a.k.a. the party in power and their buddies, it's a perfect system in their book. Your questioning of it may actually get you flagged by it as a pre-terrorist.

    Partial sarcasm aside, I can but hope that this is a junk science PR stunt

    Or it's just another appeal to power hoping to get some sweet funding and good graces.

  41. Anon, 7 May 2020 @ 7:28am

    Surprised? Why?

    This is the community that thinks polygraphs are 100% reliable - even more reliable than computers, obviously, since it's a big electric thing with waving needle pens and moving paper, so it must be scientific. They are considered proof positive anywhere but in court: FBI, CIA, Secret Service, most prosecutors' offices, police forces...

    So should we be surprised the same bunch thinks there's something reliable about facial recognition, a tech that well-placed makeup, facial hair, or sunglasses can confuse?

  42. Anonymous Coward, 7 May 2020 @ 10:02am

    80% ?

    I work with measurement devices. Any sensor that was only 80% accurate would go straight in the garbage.

  43. Bodger (profile), 7 May 2020 @ 12:39pm

    Yeah, why not?

    How could anything possibly go wrong with automated remote phrenology?

  44. Diogenes, 7 May 2020 @ 1:09pm

    Still searching

    First, test the creators of this nonsense.

  45. Scary Devil Monastery (profile), 8 May 2020 @ 5:51am

    Re:

    "This is going to help police prevent crime in what way, exactly?"

    By pre-emptively locking up anyone identified by the system to be a future criminal, obviously.

    Or fit them with ankle trackers, red-flag them in national police databases, kill their credit ratings, mandate they attend regular "parole" hearings, and blacklist them from any job having anything to do with government or security.

    For a better example on how this might work - or not, as the case may be - google the wiki entry for "social credit score" as it's being tested out in China.

  46. Scary Devil Monastery (profile), 8 May 2020 @ 6:00am

    Re: 80% ?

    "Any sensor that was only 80% accurate would go straight in the garbage."

    Ah, but that's science for scientific purposes.
    For Law Enforcement, all you need is "Yeah, he probably did it. Or will. Whatever, lock him up."

    The target demographic is regularly in the news for managing to shoot and kill people over not dropping smartphones or remote controls fast enough. I don't think they'll be bothered about a 20% inaccuracy rate. It'd probably be a great improvement over what they currently use to determine whether they should apply lethal force or not.

  47. nasch (profile), 10 May 2020 @ 9:19am

    Re:

    It's still useless though even if formatted properly, because you can't click on it to see other messages with the same tag like you can on Twitter.

  48. nasch (profile), 10 May 2020 @ 9:22am

    Re: 80% ?

    What happens when a machine is 80% accurate:

    https://www.youtube.com/watch?v=vBPFaM-0pI8

    Matt Parker and Hannah Fry

  49. Scary Devil Monastery (profile), 11 May 2020 @ 12:25am

    Re: Re: 80% ?

    "What happens when a machine is 80% accurate..."

    You mean as in that machine then identifying 20% of everyone tested as a criminal? Yeah, rolled out across a larger demographic, that may result in some future politician trying to include "1 in 5 people are CRIMINALS. Time to stop getting soft on crime!" in their platform.

    This is why facial recognition tech - or ANY sort of automated algorithm meant to decide "suspicion" - can't be trusted. Even a 1% error margin becomes an incredible problem: scan a million faces and you've wrongly flagged ten thousand people.

  50. PaulT (profile), 11 May 2020 @ 1:28am

    Re: Re:

    Exactly. The entire reason for using hashtags is so that when you're on a platform like Instagram or Twitter that uses them, you click on to the tag to see other posts with that hashtag. If you use them on a platform that doesn't support that functionality, it's just noise and an indication you don't understand what you're typing.

  51. PaulT (profile), 11 May 2020 @ 1:34am

    Re:

    "In fairness, you can use hashtags properly in Markdown"

    No, you can't because the formatting isn't the issue. It doesn't matter how you format #JustMarkdownThings because it's still just text. You don't go to other posts that used the hashtag #JustMarkdownThings no matter how much you click on it, and since that's the entire purpose of hashtags, you fail when trying to use them here.
