NIST Study Of 189 Facial Recognition Algorithms Finds Minorities Are Misidentified Almost 100 Times More Often Than White Men
from the white-dudes-score-another-unearned-perk dept
The development and deployment of facial recognition tech continues steadily, but the algorithms involved don't seem to be getting much better at recognizing faces. Recognizing faces is pretty much the only thing this tech is expected to do, and it can't seem to do the job well enough to be entrusted with things like determining whether or not a person is going to be detained or arrested.
That critical failure hasn't slowed down deployment by government agencies. There are a handful of facial recognition tech bans in place around the nation, but for the most part, questions about the tech are being ignored in favor of the potential benefits touted by government contractors.
Last year, members of Congress started demanding answers from Amazon after its "Rekognition" tech said 28 lawmakers were criminals. Amazon's response was: you're using the software wrong. That didn't really answer the questions raised by this experiment -- especially questions about the tech's disproportionate selection of minorities as potential perps.
This has been a problem with facial recognition tech for years now. Biases introduced into the system by developers become amplified when the systems attempt to match faces to stored photos. A recent study by the National Institute of Standards and Technology (NIST) found that multiple facial recognition programs all suffer from the same issue: an inordinate number of false positives targeting people of color.
Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.
The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.
As usual, the group most likely to be accurately assessed by facial recognition tech is also the group that most often promotes and deploys facial recognition systems.
Middle-aged white men generally benefited from the highest accuracy rates.
That this study was performed by NIST makes it a lot tougher to ignore. While other studies could be brushed off as anomalies or biased themselves (when performed by civil rights activist groups), a federal study of 189 different facial recognition algorithms submitted by 99 companies isn't as easy to wave away as unsound.
One of the more adamant critics of facial recognition tech's critics is Amazon. It's the company that told the ACLU it was using the system wrong after the rights group took the system for a spin last year and netted 28 false positives using Congressional reps' photos. Amazon had a chance to prove its system was far more accurate than the ACLU's tests showed it was, but it chose to sit out the NIST trials.
The problems found by NIST exist in both "one-to-one" and "one-to-many" matching. False positives in "one-to-one" matching allow unauthorized access to devices, systems, or areas secured with biometric scanners. "One-to-many" mismatches are even more problematic, as they can result in detentions, arrests, and other infringements on people's freedoms.
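To get a feel for the scale of the difference, here's a back-of-the-envelope sketch. The 0.1% per-comparison false match rate is an illustrative assumption, not a figure from the NIST report:

```python
# Back-of-the-envelope: how false positives scale in a "one-to-many" search.
# The 0.1% per-comparison false match rate is an illustrative assumption,
# not a figure from the NIST report.

FALSE_MATCH_RATE = 0.001  # assumed chance any single comparison wrongly "hits"

def expected_false_matches(gallery_size: int) -> float:
    """Expected number of wrong hits when one probe photo is compared
    against every photo in a gallery."""
    return gallery_size * FALSE_MATCH_RATE

for gallery in (1, 10_000, 1_000_000):
    print(f"{gallery:>9,} photos searched -> "
          f"{expected_false_matches(gallery):>7,.1f} expected false matches")

# One-to-one (a gallery of 1) barely ever misfires; one-to-many against a
# mugshot database misfires constantly, and someone gets detained each time.
```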
The number of participants shows this problem can't be solved simply by switching service providers. It's everywhere and it doesn't appear to be improving. The DHS wants to subject all travelers in international airports to this tech as soon as possible, which means we'll be seeing the collateral damage soon enough. A few lawmakers want to slow down deployment, but they remain a minority, surrounded by far too many legislators who feel US citizens should beta test facial recognition tech in a live environment with real-world consequences.
Filed Under: bias, facial recognition, minorities, nist
Reader Comments
Fb tag recommended
I've noticed for a few years that when I post a photo of a person with dark skin on FB, it almost always asks me if I want to tag the first dark-skinned person in my friend list, alphabetically, who is a spokesmodel for an airline. I've considered always approving the recommendation, but the person would likely get annoyed at me. But it does seem to me that the FB algorithm "thinks they all look the same".
Re: Fb tag recommended
Of course you realise why this is, don't you? The system is programmed to see white people as the default for what humans look like. That's why this happens. Sort that out, and the rest of the issues should be easier to resolve.
Re: Re: Fb tag recommended
No, that's not it at all. The real reason is far more objective than that: lighting. Faces are curved and contoured enough to cast shadows, and people with lighter skin have more contrast between lit and shadowed parts of their face than people with darker skin, because their lit skin color is simply closer to the color of shadow. This makes distinctive features blur together to the computer's pattern-matching system.
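To put rough numbers on that (the albedo and ambient-light figures below are illustrative assumptions, not measurements):

```python
# Rough numbers for the claim above. The albedo and ambient-light values are
# assumptions for illustration, not measurements. A camera records roughly
# albedo * illumination, quantized to 8 bits (0-255).

AMBIENT = 0.2  # assumed fraction of light still reaching shadowed areas

for label, albedo in (("lighter skin", 0.60), ("darker skin", 0.15)):
    lit = round(255 * albedo)               # directly lit part of the face
    shadow = round(255 * albedo * AMBIENT)  # shadowed part of the same face
    print(f"{label}: lit={lit}, shadow={shadow}, "
          f"span={lit - shadow} distinguishable levels")

# lighter skin: lit=153, shadow=31, span=122 distinguishable levels
# darker skin: lit=38, shadow=8, span=30 distinguishable levels
# The relative contrast is identical; what shrinks is the slice of the
# sensor's range the darker face occupies, so detail sits nearer the noise.
```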
It's unfortunate that that happens, but it's a simple "sucks to be them" situation that's grounded far more in the immutable reality of physics than in any racist intention (or unintended bias) on the part of the people who developed the system.
Re: Re: Re: Fb tag recommended
Assume you're right, AC. Why not build in a way to compensate for those immutable facts?
Re: Re: Re: Re: Fb tag recommended
Because the real reason is that most/all the programmers are white men, so they code for themselves, not even considering that there are other people in the world. It would be easy enough to add automatic contrast and brightness control, but you have to realize it's needed first.
Re: Re: Re: Re: Re: Fb tag recommended
"the programmers are white men, so they code for themselves, not even considering that there are other people in the world"
Some people work to requirements; some contracts make meeting specified requirements a contractual item, and some contracts have a demonstration requirement.
Re: Re: Re: Re: Fb tag recommended
How would you suggest that they do that? Should they refuse to make the attempt to identify any black people? Should they intentionally degrade the accuracy of identifying lighter-skinned people so the mistakes number the same? If the hypothesis is that darker skin inherently makes identification more difficult using current techniques, what compensation would you suggest?
Re: Re: Re: Re: Re: Fb tag recommended
"How would you suggest that they do that?"
I suggest they use better equipment so that the programmers can develop something that actually has a chance of working.
Why not ask NASA how to photograph something with high contrast?
Re: Re: Re: Re: Re: Re: Fb tag recommended
First, in many applications they're working with photos taken by someone else. Second, there are physical limits to photography. I suppose that when the team that is nerding harder to create unbreakable back-doored cryptography finishes with that project, they can task them with this one.
Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"they're working with photos taken by someone else."
"there are physical limits to photography"
" I suppose that when the team that is nerding harder ..."
Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"That is one of their biggest problems"
So how do you suggest they solve that, given that by definition the end application will always be examining photos taken by someone else?
Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
There is no solution.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"So how do you suggest they solve that, given that by definition the end application will always be examining photos taken by someone else?"
By not using facial recognition technology in the first place. Because much like the DoJ's call for "super-secure backdoored encryption" there's a fundamental logical conflict we don't have the tech to solve.
Facial recognition tech, unlike the encryption example, may eventually find workarounds for people actually having variable proportions of melanin in their skin, but we're not at that point yet. This unreliable party trick shouldn't have left the lab yet.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
So, your solution to a problem you dislike is "forget it exists".
"This unreliable party trick shouldn't have left the lab yet.:"
...and yet it has. Do you have a better way of dealing with it than "wish it didn't happen"? Because that might make you feel warm and fuzzy about the hypothetical alternate reality where that applies, but it doesn't help us in the real world.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
Your comment is understood but that poster is not wrong.
In a sane world, proof of concept programs are not tested upon the unsuspecting public.
Why is it so difficult for these folk to admit they have some more work to do?
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
At a guess, $$$$$$$$$$$$$$.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"So, your solution to a problem you dislike is "forget it exists"."
How did you get that from what I said?
I'm saying FRT needs to be debunked. Preferably before, rather than after, a trigger-happy SWAT team puts a ventilation hole in the skull of some random passerby erroneously identified as a bomb-toting terrorist on a "shoot on sight" list.
"...and yet it has. Do you have a better way of dealing with it than "wish it didn't happen"?"
Same way we dealt with eugenics, phrenology, and bite mark forensics? Via a decent debunking and hopefully the next politician wanting to opt for the "convenient" solution of mass surveillance will have to face the fact that the tech they want to employ is actively harmful rather than helpful.
"Because that might make you feel warm and fuzzy about the hypothetic alternate reality where that applies, but it doesn't help us in the real world."
Stop it right there, PaulIT. You are NOT dumb enough to seriously argue that it would have done us no good to put eugenics and phrenology back in the "bad idea" closet.
FRT is NOT reliable, and unless the human race as a species becomes far more inventive when it comes to skull shapes or we invent actual A.I. which is able to guess and estimate like humans do...then it will not become any more reliable than it is now.
Are you...seriously suggesting that since the tech is already in use we should run with it and stop trying to debunk it as the flawed assumption it is?
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"How did you get that from what I said?"
Because you said "By not using facial recognition technology in the first place" as your only suggestion? People are going to use it, whether you like it or not.
"I'm saying FRT needs to be debunked."
What do you mean by "debunked"? It's not a theory, it's a technology with many practical applications in many fields. It's not perfect, but then it's only the foolish who think it is right now.
"Preferably before, rather than after a triggerhappy SWAT team puts a ventilation hole in the skull of some random bypasser erroneously identified as a bomb-toting terrorist on a "shoot at sight" list."
You do realise that happens without this tech as well, right? Maybe the problem isn't the tech...
"You are NOT dumb enough to seriously argue that it would have done us no good to put eugenics and phrenology back in the "bad idea" closet."
But, you are dumb enough to conflate these very different issues.
"FRT is NOT reliable"
It is, depending on the purpose it's used for.
"Are you...seriously suggesting that since the tech is already in use we should run with it "
No, I'm saying that it's going to be used whether you like it or not and you need to deal with reality rather than wishing it wasn't happening.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"What do you mean by "debunked"? It's not a theory, it's a technology with many practical applications in many fields. It's not perfect, but then it's only the foolish who think it is right now. "
The problem, you see, is that there are plenty of people whose paychecks and credibility will be riding on rolling this tech out as a new standard sooner rather than later. I'm sure I don't need to point out a few examples.
The "theory" is that this tech works or will eventually work. It doesn't and never will, until humans stop having similar bone structures and skull shapes.
"You do realise that happens without this tech as well, right? Maybe the problem isn't the tech..."
The perception of a police officer that right now the guy down there isn't just guilty of being brown in public but ALSO on a "kill list" as a terrorist bomber certainly won't help. Don't even try to tell me that's not going to make shit THAT much worse. What would it take to convict a police officer of murder if he had a computer saying the guy he just shot was Abu Bakr al-Baghdadi?
"But, you are dumb enough to conflate these very different issues."
Nice of you to go full Blue on me again, PaulIT.
There's no conflation between the theory that a computer algorithm can magically tell you stuff about a person based on what boils down to skull shape, and phrenology...which insists the skull shape tells you important things about the person.
Either counterpoint, concur, or save everyone the trouble by just dipping into the baseless ad hom again so I know whether to give you a minute's worth of effort or not.
"It is, depending on the purpose it's used for."
The purpose it CAN be used for compares to having a child make crayon drawings of getaway cars. Or, rather, having that same child making crayon drawings of EVERY car and then try to tell which ones are getaway cars.
I think the OP itself - and just about every other real-life example of FRT in use - illustrates this point perfectly well.
"No, I'm saying that it's going to be used whether you like it or not and you need to deal with reality rather than wishing it wasn't happening."
So were phrenology and eugenics...until we finally buried the idea that those were worthwhile.
In other words, we can indeed look forward to FRT joining the bad examples of tech use in the graveyard of history. The only question is how many lives and/or resources will be lost before then.
I'm somehow surprised, given your handle, that you aren't more worried about the paradox of false positives, especially concerning the error rates here.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"The problem, you see, is that there are plenty of people whose paychecks and credibility will be riding on rolling this tech out as a new standard sooner rather than later"
They already have, thus the need for dealing with reality rather than trying to prevent decisions that have already been made.
"The "theory" is that this tech works or will eventually work."
It DOES work. It's just not accurate enough for certain types of application in its current state. The tech is not the problem, it's the way people are trying to use it.
"The perception of a police officer that right now the guy down there isn't just guilty of being brown in public but ALSO on a "kill list" as a terrorist bomber certainly won't help"
That cop already has that perception, the guy is already on the list, and trying to wave away facial recognition will neither remove the guy from the list nor train the officer not to be racist. Facial recognition may have the effect of making certain things worse under certain applications, but it's far from the main problem.
"What would it take to convict a police officer over murder if he had a computer saying the guy he just shot was Abul Bakr al-Bagdadi?"
The same as it took to convict the police officer who shot Jean Charles de Menezes without facial recognition being involved?
"There's no conflation..."
No, there isn't. Phrenology tries to ascertain personality traits, while facial recognition attempts to tell you a person's identity. They're completely different things. You're conflating fingerprinting and palmistry here.
That's your problem: you're conflating random shit in order to try and pretend you can just wave away something that's already in place. It's too late for warnings, and like it or not the tech is useful for many things in ways that the strawman comparisons you depend on never were.
Re: Re: Re: Re: Fb tag recommended
I'd recommend continuing to use it. The only "compensation" to a problem in machine learning is more learning. By training the model better through a cycle of feedback, by pointing out failures and teaching it to get things right, the accuracy will improve. But this will never happen if you do what certain cities have been doing: knee-jerk straight to an idiotic reaction of "this system got it wrong!!!!! Ban it and never use it again!!!!!" That's not just the wrong solution, it's the exactly wrong solution. It's literally the polar opposite of the right thing to do here.
Making mistakes and learning from them is the only way these things improve. (They're not that different from human beings in that sense!)
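For what it's worth, the loop being described is something like this sketch - with hypothetical train() and evaluate() helpers, not any vendor's actual pipeline:

```python
# A sketch of the feedback cycle described above, with hypothetical
# train() / evaluate() helpers - not any vendor's actual pipeline.

def improve(model, train, evaluate, labeled_pool, rounds=5):
    """Repeatedly fold the model's own mistakes back into its training data."""
    training_set = list(labeled_pool)
    for _ in range(rounds):
        model = train(model, training_set)
        # evaluate() is assumed to yield (image, true_label, predicted_label)
        mistakes = [(img, truth) for img, truth, guess in evaluate(model)
                    if guess != truth]
        if not mistakes:
            break  # nothing left to learn from this round
        training_set.extend(mistakes)  # hard examples get seen again
    return model
```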
Re: Re: Re: Re: Re: Fb tag recommended
"The only "compensation" to a problem in machine learning is more learning."
"By training the model better through a cycle of feedback, by pointing out failures and teaching it to get things right, the accuracy will improve."
How does one recover from mistakes made by government, specifically law enforcement? Should mistakes simply be accepted now? You know ... because they are just learning - sorta like a child. Think about that: we are governed by children?
Re: Re: Re: Re: Re: Re: Fb tag recommended
Why? All of us have a limited ability to observe our environment. We're limited both by geography and by biology, and yet we continue to learn through continued (still-limited!) observation.
So, I say "the way to improve this is to give the system feedback," and you say "if you don't give it feedback, it won't improve," as if that's contradicting me? Sounds to me like you're in total agreement with my point, just looking at it from the opposite side of the proverbial coin.
We already know the basic technology works. That's been proven indisputably through success in other domains. We've got visual recognition software that is more accurate (and more consistently accurate) at interpreting X-rays than the best human radiologists, for example. Simply because it's failing in one very specific sub-domain (accurately recognizing people with dark skin) doesn't mean the technology is bad; it's already been proven that it works. It means it just needs a bit of polish in that one specific sub-domain.
We've put 50 years of work into getting this far. Throwing it away right as it's beginning to really bear fruit because it's not perfect yet, rather than pushing to get the last 5% right, is the worst possible solution!
Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
Because computer learning algorithms aren't at human level yet. They need massive data centers and ungodly amounts of storage, memory, and CPU time, all of which translates to high energy and cooling costs. Compare that to a human, who is typically running on the equivalent of a 12v battery for logic processing and producing around 98.6°F of heat. In short, the current computer learning algorithms are far from optimized. Worse, many of them still require complete do-overs if a key assumption changes. So yeah, have fun with the required complete do-over every 4-8 years.
The problem is bias, which is a problem HUMANS haven't completely figured out yet. Case in point: let's say a machine ID'd a (insert minority group you don't interact with here) guy as the person fleeing the scene of a gruesome murder of a (insert your group here) (insert your sexual preference here) at 2 AM in a dark alley. Now which one do you think you and your fellow jurors are going to believe? The machine that "ID'd" the perp? Or the accused claiming they didn't do it? What if I say the perp's fingerprints did not match the weapon used to kill the victim? What if I say that the perp has an alibi for where they were at 2 AM on the night of the murder? Real cases aren't perfect CSI / Law and Order-level crimes. Many are based on poor evidence, and the prosecution is always out for a conviction no matter how flimsy the case is. They will happily tell the court that the tagging software is "top of the line" and "proves the accused committed the crime." Many jurors will be uninformed laymen and more susceptible to the perceived authority of the prosecutor. In this case do you really want a computer effectively handing out life-ruining guilty verdicts based on a faulty detection algorithm?
There's a huge difference between running a test and handing out blame for a crime. Especially in the US, where a Twitter mob will convict in the court of public opinion on only an accusation. People's lives are ruined: they can't work, they can't pay their bills, they get shunned everywhere they go, society takes away their membership with a permaban. All on an accusation. Let alone what happens with an actual criminal conviction, regardless of whether it was correct or not. If you think that those Twitter mobs won't somehow get access to the tagging software results, you're nuts. In short, you're effectively handing a non-trivial number of innocent people a permaban from society, all because you want some automated system to do it that has just as much bias as a human detective. What's next? You want to have an automated system handle the entire criminal justice system because humans are too lazy and corruptible? I'm sure that won't get hacked or subverted in any way... eye roll
Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
Best way to deal with the issue of human bias is to take the human out of the equation and have a rational machine look at it with a more level head. (So to speak.)
You know what's the beautiful thing about a jury? If the verdict isn't unanimous, this innocent person doesn't get convicted. All it takes is for even a single one of those twelve to not be stupid, and this entire scary scenario you've contrived here does not happen.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
But who is programming the machine?
If you, for example, decide to predict where crimes happen by where officers make their arrests, and officers are disproportionately arresting people in black neighbourhoods while letting people in white neighbourhoods off with warnings, what is that going to do to your crime predictions?
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
[Asserts facts not in evidence]
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
Exhibit A
Exhibit B
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
Also, I think you need to brush up on the definitions of "assert" and "fact," because that comment asserted nothing, being mostly a hypothetical.
Re: Re: Re: Re: Re: Re: Re: Re: Re: jury of a homicide trial
I sat for days watching a murder trial; the defendant's pastor, his mom, and we of "Mothers ROC*" sat trying to shame the judge and prosecutor for days.
Bailiff: female
Prosecutor: female
Judge: female
Defendant: Black male
Defense Att: Black male
Jury: eleven Asians
From the bench, the judge stated (with the jury present) "If I had had a girl, she wouldn't have left her homework in my car!"
One member of the jury dissented, derailing the railroad.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"Best way to deal with the issue of human bias is to take the human out of the equation and have a rational machine look at it with more a level head. (So to speak.)"
...it gives me a cold shiver down my spine reading that. And not in a good way.
First of all, you are assuming the machine will be rational. It won't be. It'll be logical. Ask a machine how to solve the problem of pollution and global warming and the answer will be "eliminate humans". It's the easiest and most sustainable answer. Ask a machine to ensure a criminal is locked up and the answer will be to lock up or eliminate anyone who could have committed the crime. Solve the issue of bigotry permanently? Answer: eliminate the minority group.
And until we manage to program the concept of "ethics" - and no one even dares to approach that - that's how "rational" a computer is. It performs nothing which is not programmed. It is a perfect idiot savant without even the tentative shred of common sense seen in the Rain Man.
Anyone who asks a machine to make judgments first needs to consider the above. Until we have answers to the above, let us, by all means, keep humans in the equation.
"All it takes is for even a single one of those twelve to not be stupid, and this entire scary scenario you've contrived here does not happen."
Are you saying this in full knowledge of how many cases have gone through to a full jury conviction where the main evidence was since-disproven (I should say debunked) forensic technology?
Facial Recognition Tech isn't just experimental. It's at the point where no expert today dares say it will EVER become reliable.
Your earlier example of X-ray analysis is a bad example unless we can ensure everyone FRT is supposed to recognize will always be caught at exactly a set distance from the camera, with a neutral, optimized background, in similarly optimized lighting conditions, and with the aid of a set of screening markers eliminating some 70% of the ambiguities.
FRT is way less substantiated than even the polygraph or bite mark analysis. Unlike those, however, the jury will believe an algorithm saying there's a "70% match" when the reality should be expressed as "100% wrong".
Let's not introduce more error sources into the courtroom, please.
Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
It is still nowhere near good enough to be deployed where mistakes can lead to people dying in a hail of bullets because the police misidentified someone entering a house and went in, guns drawn, to try to arrest them.
Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
If a machine has a limited ability to observe its environment, more observation will not result in more ability.
"Why?"
Re: Re: Re: Re: Fb tag recommended
"Assume you're right, AC. Why not build in a way to compensate for those immutable facts?"
You mean as in: "In this system where you make a computer guess based on algorithms the programmers themselves barely understand, why can't you just make the computer look at people rather than, as is currently the case, vaguely descriptive pattern blobs of refracted and reflected light?"
It's easier to make a computer see the difference between faces when there are sharp contrasts. That's a fact.
At the end of it all, it boils down to this: facial recognition tech is, at best, an unreliable party trick that all too many people who ought to know better have failed to reject until such a point in time as it becomes reliable.
Facial recognition is a damn silly idea to let outside a lab in the first place. Yes, there's racial bias in it, since the softer the contrasts your face holds, the less reliable it gets. We can't do much about that at this point in time, which is why we simply shouldn't use it.
Re: Re: Re: Re: Re: Fb tag recommended
All new technology has teething troubles. While Facial Recognition has some advantages, it can only assist, not be the final arbiter. That said, I wouldn't rule it out altogether.
I think the trouble we're having with it is not about low contrasts; it's about false positives and over-reliance. The Shiny New Thing is not necessarily magically right all the time. People who use it would do well to remember that.
Re: Re: Re: Re: Re: Re: Fb tag recommended
"That said, I wouldn't rule it out altogether."
I would. Today a gang member will pull a hood down so he doesn't get recognized by the neighborhood cop. Tomorrow there will be no neighborhood cop, only a camera, and the thug knows damn well said camera will never identify him properly as long as he stuffs a swab of cotton under his lips, draws himself a mascara moustache, and wears thick-soled shoes and glasses.
I guarantee there will be several guidelines around on how best to obscure the common tells used by FRT, as easy and well-practiced as the common methods drug vendors use today to evade police.
The human police officer, however, relies on a lot more than just a face to determine whether a passerby merits more attention. And until we actually construct a functional A.I., a machine can't follow the cop in this.
"I think the trouble we're having with it not is not about low contrasts, it's about false positives and over-reliance."
I bet you any amount of money that as soon as it is accepted as functional by enough politicians, you won't have time to complain before every human who could be replaced by a camera and a computer algorithm has been.
"All new technology has teething troubles."
Yeah, but this isn't "new" tech. This is a bunch of politicians telling Babbage that they want a machine which outputs the right answers even when the wrong data have been entered - and that no matter what he comes up with in response, that machine will become the new standard.
The issue with FRT is quite simple: you can't get consistent data sets. It's too easy for a human being to change every tell and marker used for recognition. No matter how finely you tune the recognition algorithm, it does no good when the input is variable.
Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
Eh, all tech has its limitations. Politicians will be politicians. The best we can do is try to influence them.
Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"Eh, all tech has its limitations. Politicians will be politicians. The best we can do is try to influence them."
...and every now and then some tech comes along where your spontaneous reaction is "God, I hope someone tells the politicians what NOT to use this for". Genetics was used by bigots to springboard eugenics into the public debate where it then nurtured racism at the highest level of politics for well over a century.
The GPS-equipped smartphone gave rise to the Data Retention Directive, which would have had every citizen in the EU carrying the equivalent of an electronic shackle - until, fortunately, the Court of Justice of the EU struck that directive down in bolts of thunder.
FRT is a way to make a computer TRY to identify a human and match a face against a list of known miscreants. The paradox of false positives guarantees that this will cause "death by trigger-happy cop" with fair excuses already provided.
Worse, how many citizens will be quietly flagged without their or even the government's notice by an algorithm which thinks it has seen John Doe in sufficiently many compromising places or situations to determine that s/he is a security risk?
In 1984, the hazard of always being monitored was that you never knew when someone was watching through your installed monitor. FRT ensures there will always be something monitoring every step you take in public - and fucking up the conclusion about what you were doing and who you are, without any recourse available to fix whatever register you find yourself in.
We know, cheat sheet in hand, just how badly this pans out from the archives of the old DDR, where it was found that, in addition to the genuine dissidents, a significant proportion of the East German population had been flagged as security risks just because of bad linking.
FRT is an inherent force multiplier for the bad effects of mass surveillance. It needs proper debunking, not a fervent effort to roll it out in the wishful idea that "Hey, someday we'll make it work".
Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"Genetics was used by bigots to springboard eugenics into the public debate where it then nurtured racism at the highest level of politics for well over a century."
True, but it's also done a lot of good. None of the good done outweighs the lives lost or ruined by the Eugenics nutters, though. I'm just saying "Don't throw the baby out with the bathwater."
"The GPS-equipped smartphone gave rise to the Data Retention Directive, which would have had every citizen in the EU carrying the equivalent of an electronic shackle - until, fortunately, the Court of Justice of the EU struck that directive down in bolts of thunder."
I love it when the courts protect us from politicians being politicians.
"FRT is a way to make a computer TRY to identify a human and match a face against a list of known miscreants. The paradox of false positives guarantees that this will cause "death by trigger-happy cop" with fair excuses already provided."
Not in every case, but I get your point.
"Worse, how many citizens will be quietly flagged without their or even the government's notice by an algorithm which thinks it has seen John Doe in sufficiently many compromising places or situations to determine that s/he is a security risk?"
You mean, "Already have been." The overzealous LEOs and their cohorts in the military-surveillance complex are already doing this to put people on no-fly lists, etc. The thing is, any Shiny New Thing will be abused to Hell and back. This doesn't mean there's a problem with the tech, but with people. Yes, I know the tech makes it easier for them to abuse us. The solution is to put curbs on their powers.
"In 1984, the hazard of always being monitored was that you never knew when someone was watching through your installed monitor. FRT ensures there will always be something monitoring every step you take in public - and fucking up the conclusion about what you were doing and who you are, without any recourse available to fix whatever register you find yourself in."
I acknowledge that FRT is part of the ubiquitous surveillance we live under, but it's not the only application. You've forgotten that the only time FRT or surveillance becomes an issue is when you're a target. When you're not, you're just another piece of dried grass in the ever-expanding haystack; sooner or later the data must be parsed.
We know from various TD stories that there's not enough manpower to surveil each individual; there are too many of us. Result: they pick and choose who to spy on, and unless they can demonstrate that we're a clear and present danger, they usually let it drop. We saw this with a range of terrorist offenders who were being monitored. Basically, the monitoring ended because the suspects just weren't suspicious enough to justify continuing it - until they up and murdered people. So while the surveillance apparatus is ubiquitous, the idea that all of us are being carefully watched all the time is inaccurate.
"We know, cheat sheet in hand, just how badly this pans out from the archives of the old DDR, where it was found that, in addition to the genuine dissidents, a significant proportion of the East German population had been flagged as security risks just because of bad linking."
A large proportion of the DDR's resources was spent on policing the public. Detailed dossiers were kept on each member of the public and were constantly updated. Basically, more manpower was devoted to spying on the public, which is why it was so effective. The modern surveillance regime is being run on the cheap, which is why so many actual threats slip through the net.
"FRT is an inherent force multiplier for the bad effects of mass surveillance. It needs proper debunking, not a fervent effort to roll it out in the wishful idea that "Hey, someday we'll make it work"."
If it's only used for surveillance or as the only identifier, yes, I agree. But it's not. I get where you're coming from but as I stated earlier, this is a people problem more than a tech problem.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
"I'm just saying "Don't throw the baby out with the bathwater.""
The "baby" would be pattern recognition in computing. That's not what we're talking about. FRT is lamentably all bathwater. The same way eugenics was just an application of genetics - which was defended, in it's the time, with arguments much like the ones I keep seeing about FRT.
"You've forgotten that the only time FRT or surveillance becomes an issue is when you're a target. When you're not, you're just another piece of dried grass in the ever-expanding haystack"
Ever read up on the paradox of false positives? Basically, if your face ends up in the FRT database at all, the algorithm's error percentage applies both to whether it fails to link you to your own past history and to whether it erroneously links you to someone else's past history.
In short, the trouble with FRT is that if the success rate is 99%, 1% of everyone will be falsely identified as <insert whatever current bogeyman exists>.
Even shorter? Once FRT is applied to London's CCTV network, you are already a target.
Care to guess how easy it'll be to get the authorities to remove you from that list and fix the algorithm? Assuming you even get told, and don't just discover that you fail all job applications to government posts or industry sectors considered sensitive.
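The arithmetic behind that, with assumed numbers (a million faces scanned, 100 genuine targets, a system that's right 99% of the time in both directions):

```python
# The arithmetic of the false positive paradox, with assumed numbers:
# a million faces scanned, 100 genuine targets, and a system that is
# right 99% of the time in both directions.

population = 1_000_000
targets = 100
hit_rate = 0.99          # chance a real target is flagged
false_alarm_rate = 0.01  # chance an innocent person is flagged

true_alarms = targets * hit_rate                          # ~99
false_alarms = (population - targets) * false_alarm_rate  # ~9,999

share_correct = true_alarms / (true_alarms + false_alarms)
print(f"{false_alarms:,.0f} innocents flagged for {true_alarms:.0f} real hits")
print(f"chance any given alarm is correct: {share_correct:.1%}")  # about 1%
```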
"The modern surveillance regime is being run on the cheap, which is why so many actual threats slip through the net."
Which means more people, rather than fewer, will be falsely identified as threats.
"I get where you're coming from but as I stated earlier, this is a people problem more than a tech problem."
If we're talking about a smartphone's face recognition feature unlocking for millions of people, none of whom, fortunately, will be in a position to try, then sure. It's a people problem - whether you choose to rely on a bizarre lock which is hard to hack for any given would-be intruder but will automatically unlock in the presence of people chosen at random.
But when the "tech" is "Facial Recognition Technology" deployed at scale then we have the same issue as when the tech is phrenology or the newly minted SCAN psych profile test.
The only part of that which is a people problem is that of people willing to give credibility to the modern equivalent of the soothsayer and witch hunter.
Re: Re: Re: Re: Re: Re: Fb tag recommended
"not about low contrasts, it's about false positives"
Present-day imaging technology has issues with high contrast, which contributes to both false positives and false negatives. This is a problem; too bad some folk don't care.
The shiny new thing being right would be a random occurrence and probably not repeatable.
Re: Re: Re: Fb tag recommended
Yes and no. Yes to the "lighting," no to the "objective."
For decades, when developing cameras, the way that Kodak calibrated them, made sure that the photos coming out of the printer were true-to-life, was to use "Shirley cards." If Shirley didn't look good in the picture, they had to reconfigure the camera.
The problem, of course, is that "Shirley" was always a white woman. So, if people with dark skin tones didn't get captured properly, the camera shipped anyway. By the time Kodak was using more diverse Shirley cards, digital photography was taking off, so they didn't end up getting used widely.
So, you're right that lighting is the issue, but it would only be "objective" if there weren't a deliberate trade-off happening - a trade-off usually resolved by putting more effort into making sure pale-skinned people don't look washed out than into ensuring that dark-skinned people have the lines of their faces illuminated.
Re: Re: Re: Re: Fb tag recommended
"but it would only be "objective" if there weren't a deliberate trade-off happening, a trade-off usually resolved through putting more effort into making sure pale-skinned people don't look washed out than ensuring that dark-skinned people have the lines of their faces illuminated."
Sadly there is still the objective reality that contrast differences are more visible against a pale background than against a dark one.
The computer doesn't care whether or not an actor looks washed out - and arguably Shirley's doctored photos would be a detriment for the computer as well, not being true to reality as depicted. It tries to check lines of shadow to determine facial topography. And this is just easier to do if you're a glaring pale pink, beige, or yellow than if you are heavily tanned or brown.
That said, even Michael Jackson could give an FRT camera a hard time. In most cases the camera will have to take what it thinks is a face, twist it into what it thinks is the proper angle, adjust what it thinks are the key points of what it believes to be bone structure, eye position, facial borders, nose structure...
At the end of the day it will have to shrug and give a grudging nod of "aye" if enough markers hit a threshold deemed "sufficiently close" in the programming.
And if any facial tissues are puffy from a night of drinking the day before, or due to a head cold, sucks to be you when the local precinct just got alerted by their ever-vigilant computer lapdog that "Dahmer the 2nd, known bomb-toting maniac" is walking around outside.
I'm thinking that the whole love affair politicians and law enforcement have with cameras will be over as soon as it is determined that the cameras really aren't that useful in identifying the type of people they REALLY wanted to identify in the first place.
At which point in time they'll just start insisting that everyone should have their passport and SSN in the form of an embedded RFID chip.
Re: Re: Re: Re: Re: Fb tag recommended
If that were "the objective reality," then any photo on a beautiful sunny day would have better contrast than one taken near sunrise or sunset. In reality, a photo at the height of a sunny day is generally a washed-out mess, whereas the hour after sunrise or before sunset is referred to by photographers as the "golden hour" for the rich, saturated colours that can be captured. And bright whites captured next to dark shadows tend to be "blown-out" and rendered completely white. Please, look at the photo of that magpie and tell me that the contrast differences in the pale areas of the bird's head are more visible than those in the dark areas.
There is just as much difficulty in capturing the details of a face which is much brighter than the rest of the image as there is in capturing the details of a face which is much darker than the background it appears against. Both are issues of dynamic range, and both can be solved by adjusting algorithms so that the camera captures what its designer thinks is important in the image. The fact that facial recognition software has little problem spotting the differences between pale faces (which would look blown-out before algorithmic compensation), as opposed to dark faces (which would look underexposed before algorithmic compensation), tells me a lot about what the designers think is important to capture in the image.
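A toy simulation of that trade-off, using synthetic luminance samples rather than real image data (the standard deviation of the captured values is a crude stand-in for surviving detail):

```python
import numpy as np

# Toy simulation of the dynamic-range trade-off: synthetic luminance samples
# (0.0-1.0), not real image data. The standard deviation of the captured
# 8-bit values is a crude proxy for how much detail survives.

rng = np.random.default_rng(0)
bright_face = 0.90 + 0.05 * rng.standard_normal(10_000)  # pale face, full sun
dark_face = 0.08 + 0.05 * rng.standard_normal(10_000)    # dark face, shade

def capture(scene, exposure):
    """Scale by exposure, quantize to 8 bits; anything pushed past 0 or 255
    is clipped, and the detail there is unrecoverable."""
    return np.clip(np.round(scene * exposure * 255), 0, 255)

for exposure in (0.5, 1.0, 3.0):
    print(f"exposure {exposure}: "
          f"highlight detail {capture(bright_face, exposure).std():5.1f}, "
          f"shadow detail {capture(dark_face, exposure).std():5.1f}")

# Raising exposure rescues shadow detail but blows the highlights out to a
# flat 255; lowering it preserves highlights but crushes the shadows. The
# designer picks which faces keep their detail.
```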
Re: Re: Re: Re: Re: Re: Fb tag recommended
"Please, look at the photo of that magpie and tell me that the contrast differences in the pale areas of the bird's head are more visible than those in the dark areas."
I won't. But then again, I'm human, which means I have an entirely different field of vision, understanding, and set of preconceptions than a computer, which hasn't a fucking clue about the concepts of "color" and "contrast" besides what we can render into math.
"The fact that facial recognition software has little problem spotting the differences between pale faces (which would look blown-out before algorithmic compensation), as opposed to dark faces (which would look underexposed before algorithmic compensation), tells me a lot about what the designers think is important to capture in the image."
You use the word "look" a lot which is why we're having a fundamental misunderstanding. A computer can not "look". It doesn't "see". It doesn't "understand" or "guess".
To a computer, the blown-out portion would be a lot easier to work with because the contrast is sharper, even though to a human being it will look like a blurry mess. To the computer, the sharp lines it can work with are a good thing, whereas a gradient field - which is what supplies much of the context to a human - is almost worthless.
The computer would analyze that icosahedron in your link and ONLY care about which image would provide a better grid reference.
Here's one of the real issues with FRT. For the computer to make a judgment, it has to identify features which aren't easily changed - which don't shift with age (or do so predictably), which don't change with disease, exposure to sun and weather, with changed diets, etc. THAT leaves us with a very few unchanging criteria which, lamentably, are almost identical for millions of people.
The computer isn't being asked to separate fingerprints. It's being asked to identify people based on data sets which are far from unique.
Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
Gah, you're still not getting it.
There is no detail in the white part of the bird's head. No detail was captured. Nearly the entire area is the colour with the hex value 0xFFFFFF (pure white). The camera (or, rather, the developers who wrote the program) had to make a choice between capturing the contrast between the white and the black, between different shades of white, and between different shades of black, and made the choice that, in images like this one, the contrast between the darkest black and the brightest white should be preserved. Thus, in that picture, the whole white area was bright enough in contrast to the black area that it was "blown out" - given the maximum brightness value possible - and all detail was lost.
And that's what photography (or any kind of analogue or digital image capture) comes down to: a bunch of trade-offs about what detail is captured, and what detail has to be discarded.
If there is insufficient contrast in images of faces with darker pigmentation to perform facial recognition, that's not because of some "objective reality" that such contrast is harder to capture in darker faces; unless their skin is painted in Vantablack, the variation in light intensity is there to be captured, and is reaching the camera's sensor. If the contrast doesn't show up in the RAW image file, it's because someone made a trade-off somewhere along the line that resulted in that detail being discarded in favour of some other contrast.
And, if the people who wrote the image software were willing to spend the time to make sure that contrasts in facial detail were always captured for people with pale skin colour and not people with darker skin, that's a people problem, not an "objective reality" problem.
Re: Re: Re: Re: Re: Re: Re: Re: Fb tag recommended
Thank you for that explanation.
"If the contrast doesn't show up in the RAW image file, it's because someone made a trade-off somewhere along the line that resulted in that detail being discarded in favour of some other contrast."
So if the bias is hardcoded to a certain range of color density and shade, the next step would be to have the computer add the variable of skin shade before processing the image - and make that call depending on several peripheral variables. I can sort of see that the trade-off may, in addition to the inherent bigotry of the sample population, also have a lot to do with processing capacity - and with whether to write a few more million lines of code.
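The simplest version of that kind of compensation would be plain histogram equalization - a sketch, for illustration only; real pipelines use fancier normalization:

```python
import numpy as np

# Sketch of per-image compensation: plain histogram equalization, which
# spreads whatever brightness range a face actually occupies across the
# full 8-bit scale. Illustrative only - real pipelines use fancier
# normalization, and this says nothing about the recognition step itself.

def equalize(gray: np.ndarray) -> np.ndarray:
    """Remap an 8-bit grayscale image so its levels fill 0..255 evenly."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to 0..1
    return np.round(cdf[gray] * 255).astype(np.uint8)

# A made-up underexposed "face crop" occupying only levels 5-49:
dark_crop = np.random.default_rng(1).integers(5, 50, size=(64, 64),
                                              dtype=np.uint8)
stretched = equalize(dark_crop)
print(dark_crop.min(), dark_crop.max())  # 5 49 - a cramped slice of the range
print(stretched.min(), stretched.max())  # roughly 6 255 - nearly full scale
```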
"And, if the people who wrote the image software were willing to spend the time to make sure that contrasts in facial detail were always captured for people with pale skin colour and not people with darker skin, that's a people problem, not an "objective reality" problem."
I can see your point here, I really can.
That, however, does raise numerous questions, the most obvious of which would be: "Why on earth isn't this the first known error brought up?"
Animated Minions have five interchangeable faces
Let's back up here.
The Portuguese rounded Africa 500 years ago, burning coastal cities all the way up to the Horn and rounding up slaves to produce sugar, reduced into a drug by horrendous process: one of few 'foods' that cross the blood-brain barrier. Fat and bone structure morphed human faces almost within a generation. This is physiology Not 'Mendel's gene selection'. People in restricted diaspora and/or similar diets look similar, study the Samaritans in Palestine.
Disclaimer: My father patented computer image recognition US3010024A in 1961
Re: Animated Minions have five interchangeable faces
This almost feels like you have a very interesting point to make here, but it's a bit too rambling to make sense of. Would you mind elaborating and filling in some of the context?
Re: Re: Animated Minions have five interchangeable faces
Computer image recognition fails without differentiation. After 'rectifying' the subject image (factoring out background, face orientation, size, adjusting chroma, et al.), if what is left matches a million humans in your data set, why bother? Higher/greater resolution (a smaller circle of confusion) is superfluous.
People who have avoided processed foods, or who grew their own food for most of their last 50 generations, tend to have similar features within their community.
P.S. This is NOT the position of the American Dental Association.
Re: Re: Re: Animated Minions have five interchangeable faces
"After 'rectifying' the subject image (factoring out background, face orientation, size, adjusting chroma et.al.) if what is left matches a million humans in your data set, why bother? Higher/greater resolution (a smaller circle of confusion) is superfluous. "
And this is why I compare FRT to eugenics and phrenology. It's like dumping massive amounts of effort and money into a mechanism meant to trace getaway cars which, in the end, only identifies anyone driving a red Ford as a suspect.
Re: Re: Fb tag recommended
That bias would be there for any programmer. Our ability to tag Chinese Americans probably sucks, but I bet if they went to China they'd be identified correctly by Chinese FR software just as often as American FR software identifies white people correctly.
The human FR "software" is much the same way. The reason the meme "They all look alike!" exists is because different races have different distinguishing features and we grow accustomed to identifying people based on the most common set. When we are suddenly surrounded by people with a different common set of features we can't tell them apart without retraining ourselves. In that respect you could say that the computer FR software is operating as it should. I.e. Like a human.
Of course an authoritarian doesn't want their buddies to be ID'd, only those they view to be the cause of society's ills. Which, in most cases, just so happens to be the people they consider to be "all the same." If that's what they want to enshrine, they could just grab the software developed by a nation predominantly run by that group, or they could have their software do what they do: can't tell them apart? Guilty.
'We'd love to, really, but we already have that day planned...'
One of the more adamant critics of facial recognition tech's critics is Amazon. It's the company that told the ACLU it was using the system wrong after the rights group took the system for a spin last year and netted 28 false positives using Congressional reps' photos. Amazon had a chance to prove its system was far more accurate than the ACLU's tests showed it was, but it chose to sit out the NIST trials.
Because nothing sends a message of confidence in your product, and in how it's better than the alternatives in the very field you've been defending against critics, like refusing to have your product tested right alongside those other products.
A few lawmakers want to slow down deployment, but they remain a minority, surrounded by far too many legislators who feel US citizens should beta test facial recognition tech in a live environment with real-world consequences.
Likely because most of them aren't going to have to worry about false positives resulting in a ruined day/week, and even if they are flagged, simply providing ID will get them off the hook. It's easy to support something when you know you won't be impacted by it.
Science is only partly to blame
Well, guess what? The initial prototypes and ideas will be tested in-house with in-house subjects, and the ideas that work well enough to bear out implementation and more formal tests are those that work on the typical scientist.
Then we have something working photographically, with reflected light. That favors bright skin with hard features/wrinkles, but not indiscriminately so.
So it's not just the training material: it's the whole technology of face recognition that is tuned towards middle-aged white men.
So we have a reasonable tool for recognizing middle-aged white criminals. How do we get from there to this being a problem for colored people?
Because police will not accept a tool that will only pick out white male middle-aged suspects, the confidence levels for "detecting" colored persons are lowered enough that colored people are flagged often enough to keep police busy as well. That this selection turns out a lot more randomized than that of the white suspects is somebody else's problem.
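A toy model of that threshold trade-off, with invented match-score distributions (no real system is being modeled here):

```python
import numpy as np

# Toy model of the threshold trade-off described above. The match-score
# distributions are invented for illustration; no real system is modeled.

rng = np.random.default_rng(2)
well_modeled = rng.normal(0.80, 0.05, 1_000)    # true matches, favored group
poorly_modeled = rng.normal(0.55, 0.10, 1_000)  # true matches, other group
strangers = rng.normal(0.30, 0.10, 100_000)     # random non-matches

for threshold in (0.7, 0.5):
    print(f"threshold {threshold}: "
          f"favored-group hits {np.mean(well_modeled > threshold):.0%}, "
          f"other-group hits {np.mean(poorly_modeled > threshold):.0%}, "
          f"strangers wrongly flagged {np.mean(strangers > threshold):.2%}")

# Dropping the threshold from 0.7 to 0.5 is the only way to flag the poorly
# modeled group at a useful rate - and it multiplies the false positives
# landing on random strangers by orders of magnitude.
```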
Now facial recognition could be a whole lot better given sufficiently controlled lighting or high camera sensitivity and quality. But in the field and for surveillance, small and cheap counts. And in the brewing pots of technology, stuff that appears to work and be mass-producible tends to bubble to the top.
So the whole idea of light/reflection-based recognition on cheap devices without extra lighting is inherently biased. If you are mugged in a dark street, even as a human you are more likely to remember a light-skinned attacker's face than a dark-skinned one's (assuming your facial recognition is equally well trained for both), just because there is more to see.
Add to that a likely biased sample set at least for the initial development stage of finding recognisable criteria, and then even if the ultimate training sets are "normalised" to represent different ethnic origins and compositions, the performance will not be.
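Here's a rough back-of-the-envelope sketch (made-up numbers, assuming a simple reflectance model) of why the same camera gets less usable signal from darker skin under identical lighting:

    # Back-of-the-envelope: recorded intensity ~ illumination * albedo, and facial
    # features show up as small relative variations around that base level.
    # All numbers here are invented for illustration.
    illumination = 100.0        # arbitrary light level, identical for both faces
    feature_variation = 0.10    # assume features modulate reflectance by ~10%

    for label, albedo in [("lighter skin", 0.60), ("darker skin", 0.20)]:
        base = illumination * albedo
        swing = base * feature_variation  # absolute intensity range a feature spans
        print(f"{label}: base signal {base:.0f}, feature swing +/-{swing:.1f}")

    # The darker face's features span fewer of the sensor's intensity levels, so
    # noise and quantization wash out proportionally more of the detail.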
There is definitely more reliable technology (fingerprints, DNA signatures, etc.) that would work for large-scale surveillance purposes, but people are not comfortable with large-scale surveillance. Facial recognition is large-scale surveillance, but since it works comparatively unobtrusively (and unreliably), people try not to think too much about whether they want it or are willing to tolerate it.
But at some point we need to answer the question of what we want, and then decide which tools we are going to accept and which we aren't. The only reason facial recognition is a big thing is that it can be stealthy, and people can pretend it does not affect them until it does.
Re: Science is only partly to blame
What makes you think that what 'we' want will have any bearing on what 'law enforcement' wants? These government contractors are pushing technology that could make law enforcement's life easier, and that's not gonna stop until there are consequences for the technology's failures. At first those consequences will be monetary: you fools wasted money on something that doesn't work. Then comes a long, hard fight, where the winning hand could be the judge(s) getting identified as criminals themselves. But it isn't those fools' (law enforcement's) money, so why should they care?
Re: Re: Science is only partly to blame
The consequence will be more innocent people in prison working as slaves - mission accomplished!
The DHS wants to pour even more money into unproven tech to benefit some contributor.
Somehow people will believe this makes us safer, and that the thousands of people harassed when the match is faulty don't really exist. Huh, that's the same bias as in the people coding these programs: they forget there are people other than white people.
Re:
"Huh thats the same bias in the people coding these programs, they forget there are other people than white people."
Yes. "people guilty of something" in the eyes of quite a few of the more fervent backers of this technology. Which is Father christmas's gift to the alt-right brownshirt brigade for many holiday season's to come.
I can see it now... "You mean no matter who dun it, this wonderful thingamajig picks out a brown man as the criminal? Dayum! Sign mah state up for five thousand a' those!"
"That critical failure hasn't slowed down deployment"
The government criminal-justice system is not worried about false positives in rounding up bad guys -- politicians just hire more cops and build more prisons.
Everybody is a criminal suspect in modern America (unless you work for the government).
Re: "That critical failure hasn't slowed down deployment"
At which point you remove all doubt.
It is a well-known fact that cameras do not "see" things as well as humans do; contrast, for example. If the cameras were made to higher standards, then maybe they could measure small differences in skin tone given the same lighting and background, but I doubt that is how this is intended to be deployed. Both the hardware and the software are very deficient in the characteristics required to perform the feat they desire.
Coming soon...
Combine deep fakes with algorithmically generated faces to:
a) automatically unlock face-locked phones
b) generate perp-shots from phones found at the scene of a crime.
Re: Coming soon...
Don't forget the false positives from people wearing photographs of bad guys on their faces like a mask. Then, when everyone is wearing Guy Fawkes masks (get yours here), everyone will be arrested for reasons not immediately comprehensible.
That's what she said...
These tests were done in the '70s, when we first had computers running.
There is a solution, but they don't understand it.
China is already working on this...
China has already figured out how to solve this issue: just get someone to supply the necessary data, so that they can train the algorithms to recognize millions of different black faces.
Whether or not that's a good thing is left as an exercise for the reader.
Re: China is already working on this...
Sure, their solution is to jail everyone.
DR
I was born and raised in the Midwest USA. In my neighborhood it wasn't even one percent dark-skinned people; it was one person. People would want to know which European country you were from so they could compare their whiteness.
I lived in Mexico for a year, and it occurred to me there are a lot of American races we don't even consider. I've spent the past two years in the Dominican Republic; my daughter is Dominican. The DR is maybe a good place to work on this type of software: Dominicans are a mix of several races, and they consider themselves... Dominican.
In the USA attitudes are very polarized: there's white, brown, and black.
Anyhow, I think it's possible that a computer vision system could accurately determine a genetic mix without DNA. Humans are basically software-driven robots anyway, right? But our computer stuff now is kinda written in English using math, whereas humans are the other way around, I suppose: our English and math are written by our computer guts.
Re: DR
"In the USA attitudes are very polarized, there's white, brown and black."
More so than elsewhere? There are many more 'attitudes' than just racism.
"a computer vision system can accurately determine a genetic mix without DNA"
Been watching Star Trek again?
"Humans are basically software driven robots anyway right?"
I don't think so, why do you?
Certainly the human eye is much more capable than present-day imaging hardware, and the human brain is not like a computer.
"But our computer stuff now is kinda like written in English"
Yeah - no other languages are used in other countries to write any code ... for real?
Must have
Re: Re: DR
Most or all major programming languages are in English. Some programmers who speak other languages use their native language for the parts they make up, and some write everything in English.
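For instance, here's an illustrative Python snippet (the identifier names are made up): the language's keywords stay English even when everything the programmer names is German.

    # The keywords (def, for, in, return) and the built-ins (len, print) are
    # English; only the programmer-chosen names are German.
    def berechne_durchschnitt(zahlen):
        """Berechnet den Durchschnitt einer Liste von Zahlen (computes the average)."""
        summe = 0.0
        for zahl in zahlen:
            summe += zahl
        return summe / len(zahlen)

    print(berechne_durchschnitt([2, 4, 6]))  # 4.0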
Re: Re: Re: DR
"all major programming languages are in English"
This is rather difficult to believe.
Re: Re: Re: Re: DR
Why is that?
C
C++
Java
JavaScript
C#
F#
FORTRAN
COBOL
Erlang
Haskell
Pascal
Go
Rust
Scala
PHP
Perl
BASIC
Ruby
Python
Kotlin
All with the keywords and SDK names in English. Can you name a programming language that is not in English?
Re: Re: Re: Re: Re: DR
Yeah, I suppose you are right: there aren't any major programming languages written in anything other than English, including the comments.
I read the other day about how China was going to write their own operating system, probably a Linux variant, but I'm sure the Chinese will only use English in their highly customized OS.
I worked with several European outfits on a rather large software system. They tried to use English exclusively, but the comments were not always in English. No big deal.
Re: Re: Re: Re: Re: Re: DR
I'm not sure if you're being sarcastic or what. Once you bring comments into it, that's a whole different story because now you're talking about what language the end user is using. I'm talking about what the programming language itself is written in. Java, for example. Every keyword is in English. All of the SDK - every class, every variable, every method - is in English. That's what I'm talking about. And I've never heard of a programming language where that is not the case. That's not to say there aren't any - I'm sure there are - but it's likely they're not in any widespread use.
I have no idea about Chinese practices.
Yes, as I said: "Some programmers who speak other languages use their native language for the parts they make up, and some write everything in English."
Re: DR
"Humans are basically software driven robots anyway right? But our computer stuff now is kinda like written in English using math, except humans are the other way around I suppose. Our English and math are written by our computer guts."
Hrm, not a good analogy either way.
Human language derives from a bunch of apes needing to tell one another where the ripe fruit is. It's very ambiguous, and you need to run a game of Chinese whispers exactly ONE time to realize just how screwed we are in trying to use our respective languages for anything requiring consistent accuracy.
Math is what we invented to describe physical and logical reality in a way whose exact meaning could not be lost to interpretation.
We built computers to understand VERY basic math. Then we hardwired instructions translating English symbols into math operations (assembly language). Then we made vast, cumbersome programs so we could input command codes as formulas (the various programming languages, from BASIC and FORTRAN to C++) and let the assembly language pick up the slack and carry out the instructions.
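You can actually watch one of those translation layers at work. A minimal Python sketch (the exact instruction names vary between Python versions):

    # Peek one layer down: Python source (English-ish symbols) is compiled to
    # bytecode instructions before anything resembling machine math runs.
    import dis

    def add(a, b):
        return a + b

    dis.dis(add)  # prints instructions like LOAD_FAST / BINARY_OP (version-dependent)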
You COULD call a human a software-driven robot (which accepts almost every command as a feedback loop from the hardware). But that's like calling the "Summit" supercomputer an "abacus".
"Anyhow I think it's possible that a computer vision system can accurately determine a genetic mix without DNA."
Unlikely. Putting aside that phenotype varies wildly even without adding epigenetics to the mix -- allowing a VERY wide range of appearance variations -- a computer cannot SEE, nor can we teach it to. It can only subject light patterns to systematic analysis. With facial animation patterns and body posture making up so much of what we use to determine identity, it's pretty clear that a computer will never be able to determine identity, let alone genetics, by face analysis alone.
A face isn't a fingerprint -- an insignificant area of skin where major differences mean nothing to the functionality. Even small deviations in facial structure generate large functional differences. So how is a computer going to make a good guess at identity when the same exact bone structure could be identical across millions of people, and the soft tissues surrounding it change radically depending on whether you just had a stiff drink, an allergic reaction, or just got back from vacation sporting a heavy tan and salt-damaged skin?
Re: Re: DR
A few years ago, the LAPD handed out paper notices that they were enforcing a new rule in North Hollywood: customers could not wear hats in the retail stores, because hats were defeating the security cameras used against "shrinkage" (shoplifting). Since we are a radio studio, it fell on deaf ears. Oddly, I have worn a wide-brimmed hat on every California driver's license for decades.
At my 20-year high school reunion, I recognized a few people across the room only from their body language, not recalling their names or faces. We went to the same grammar school.
Re: Re: Re: DR
What's next... everyone has to get a Donald combover?
Re: Re: Re: DR
"At my 20 year High school reunion, i recognizes a few people across the room only from their body language, not recalling the name or faces. We went to the same grammar school."
I don't recall offhand how much of human-to-human recognition comes from cues other than strictly facial ones, but I believe it's more than 50%. Normally, a face having clearly consistent individual features is highly exceptional -- to the point where actors with out-of-the-norm features have their own IMDb page. At that point you actually hit the uncanny valley, which is part of why these people are often pigeonholed into "weird" or "scary" character roles.
Actual facial-only recognition comes from a combination of a great many factors. And that is, of course, where wearing a hat, swelling, small injuries, and, of course, simple aging will royally screw any computer trying to put a name to a face... which is why, in the end, every identity card in Europe is linked to a birth certificate, with a set of fingerprints whenever possible.
As far as law enforcement tools go, I suppose this one fits well with all their other faulty playthings.
"biased themselves (when performed by civil rights activist groups)", bruh c'mon