Stories about robots and their impressive capabilities are starting to crop up fairly often these days. It's no secret that they will soon be capable of replacing humans for many manual jobs, as they already do in some manufacturing industries. But so far, artificial intelligence (AI) has been viewed as more of a blue-sky area -- fascinating and exciting, but still the realm of research rather than the real world. Although AI certainly raises important questions for the future, not least philosophical and ethical ones, its impact on job security has not been at the forefront of concerns. But a recent decision by a Japanese insurance company to replace several dozen of its employees with an AI system suggests maybe it should be:
Fukoku Mutual Life Insurance believes [its move] will increase productivity by 30% and see a return on its investment in less than two years. The firm said it would save about 140m yen (£1m) a year after the 200m yen (£1.4m) AI system is installed this month. Maintaining it will cost about 15m yen (£100k) a year.
The system is based on IBM's Watson Explorer, which, according to the tech firm, possesses "cognitive technology that can think like a human", enabling it to "analyse and interpret all of your data, including unstructured text, images, audio and video".
The technology will be able to read tens of thousands of medical certificates and factor in the length of hospital stays, medical histories and any surgical procedures before calculating payouts.
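For a rough sense of where that "less than two years" figure comes from, here's a back-of-envelope check using only the yen figures quoted above:

```python
# Back-of-envelope payback check using the figures quoted above (in yen).
install_cost = 200_000_000    # one-time cost of the AI system
annual_savings = 140_000_000  # salaries saved per year
annual_maintenance = 15_000_000

net_annual_savings = annual_savings - annual_maintenance  # 125m yen per year
payback_years = install_cost / net_annual_savings

print(f"Payback period: {payback_years:.1f} years")  # ~1.6 years, i.e. under two
```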
It's noteworthy that IBM's Watson Explorer is being used by the insurance company in this way barely a year after the head of the Watson project stated flatly that his system wouldn't be replacing humans any time soon. That's a reflection of just how fast this sector is moving. Now would be a good time to check whether your job might be next.
I saw a lot of excitement and happiness a week or so ago around reports that the EU's new General Data Protection Regulation (GDPR) might include a "right to an explanation" for algorithmic decisions. It's not clear whether that's strictly true, but it's based on a reading of the agreed-upon text of the GDPR, which is scheduled to go into effect in two years.
Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them.
Lots of people on Twitter seemed to be cheering this on. And, indeed, at first glance it sounds like a decent idea. As we've discussed recently, there has been growing awareness of the power and faith placed in algorithms to make important decisions, and sometimes those algorithms are dangerously biased in ways that can have real consequences. Given that, it seems like a good idea to have a right to find out the details of why an algorithm decided the way it did.
But it also could get rather tricky and problematic. Part of the promise of machine learning and artificial intelligence these days is precisely that we no longer fully understand why algorithms decide things the way they do. This applies to lots of different areas of AI and machine learning, but you can see it clearly in the way AlphaGo beat Lee Sedol at Go earlier this year: it made decisions that seemed to make no sense at all, but worked out in the end. The more machine learning "learns," the less possible it is for people to directly understand why it's making those decisions. And while that may be scary to some, it's also how the technology advances.
So, yes, there are lots of concerns about algorithmic decision-making -- especially when it can have a huge impact on people's lives -- but a strict "right to an explanation" may actually place limits on machine learning and AI in Europe, potentially hamstringing projects by requiring them to stay within the bounds of human understanding. The full paper on this more or less admits that possibility, but suggests it's okay in the long run, because the transparency aspect will be more important.
There is of course a tradeoff between the representational capacity of a model and its interpretability, ranging from linear models (which can only represent simple relationships but are easy to interpret) to nonparametric methods like support vector machines and Gaussian processes (which can represent a rich class of functions but are hard to interpret). Ensemble methods like random forests pose a particular challenge, as predictions result from an aggregation or averaging procedure. Neural networks, especially with the rise of deep learning, pose perhaps the biggest challenge—what hope is there of explaining the weights learned in a multilayer neural net with a complex architecture?
In the end, though, the authors think these challenges can be overcome.
While the GDPR presents a number of problems for current applications in machine learning they are, we believe, good problems to have. The challenges described in this paper emphasize the importance of work that ensures that algorithms are not merely efficient, but transparent and fair.
I do think greater transparency is good, but I worry about rules that might hold back useful innovation. Prescribing exactly how machine learning and AI need to work this early in the process may create problems of its own. I don't think there are easy answers here -- this is a genuinely thorny problem -- so it will be interesting to see how it plays out in practice once the GDPR goes into effect.
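To make the interpretability tradeoff a little more concrete, here's a minimal sketch -- assuming scikit-learn and a purely synthetic, made-up dataset -- contrasting a linear model, whose coefficients can be read off directly as an "explanation," with a small neural network, whose learned weights don't map onto anything a regulator or affected user could easily parse:

```python
# Minimal sketch of the interpretability tradeoff (assumes scikit-learn).
# The "loan-style" data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000)) > 0

# A linear model: each coefficient is a direct, human-readable statement
# about how a feature pushes the decision up or down.
linear = LogisticRegression().fit(X, y)
for name, coef in zip(features, linear.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# A small neural net: comparable accuracy, but the "explanation" is
# thousands of weights spread across hidden layers.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
n_weights = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print(f"Neural net parameters to 'explain': {n_weights}")
```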
As self-driving cars have quickly shifted from the realm of science fiction to the real world, a common debate has surfaced: should your car be programmed to kill you if it means saving the lives of dozens of other people? Should your automated vehicle, for example, be programmed to take your life when its on-board computers realize the alternative is the death of dozens of bus-riding school children? Of course, the debate technically isn't new; researchers at places like the University of Alabama at Birmingham have been contemplating "the trolley problem" for some time:
"Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a bus-full of others, but not both. What do you do?"
It's not an easy question to answer, and obviously becomes more thorny once you begin pondering what regulations are needed to govern the interconnected smart cars and smart cities of tomorrow. Should regulations focus on a utilitarian model where the vehicle is programmed to prioritize the good of the overall public above the individual? Or should self-driving cars be programmed to prioritize the welfare of the owner (the "self protective" model)? Would companies like Google, Volvo and others be more or less likely to support the former or the latter for liability reasons?
"Even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves," the authors of the study wrote...The study participants disapprove of enforcing utilitarian regulations for [autonomous vehicles] and would be less willing to buy such an AV," the study's authors wrote. "Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of safer technology."
To further clarify: the surveys found that if both types of vehicles were on the market, most people surveyed would prefer that others drive the utilitarian vehicle while they continue driving self-protective models themselves, suggesting the latter might sell better:
"If both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so," the authors concluded. "… Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether."
This social dilemma sits at the root of designing and programming ethical autonomous machines. And while companies like Google are also weighing these considerations, if utilitarian regulations mean lower profits and flat sales, it seems obvious which path the AV industry will prefer. That said, once you begin building smart cities where automation is embedded in every process from parking to routine delivery, would maximizing the safety of the greatest number of human lives take regulatory priority anyway? And what would be the human cost of prioritizing one model over the other?
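For what it's worth, the gap between the two models the study contrasts can be sketched in a few lines of toy code. To be clear, this is purely hypothetical -- it is not how any manufacturer actually programs its vehicles -- but it shows how small the difference in the decision rule really is:

```python
# Toy illustration of the two policies the study contrasts -- purely
# hypothetical, not anything any manufacturer actually implements.
from dataclasses import dataclass

@dataclass
class Outcome:
    occupant_deaths: int
    other_deaths: int

def utilitarian_choice(options):
    # Minimize total expected deaths, occupants included.
    return min(options, key=lambda o: o.occupant_deaths + o.other_deaths)

def self_protective_choice(options):
    # Protect the occupant first; only then minimize harm to others.
    return min(options, key=lambda o: (o.occupant_deaths, o.other_deaths))

swerve = Outcome(occupant_deaths=1, other_deaths=0)
stay = Outcome(occupant_deaths=0, other_deaths=10)

print(utilitarian_choice([swerve, stay]))      # swerves: 1 death instead of 10
print(self_protective_choice([swerve, stay]))  # stays: the occupant survives
```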
Granted, this is getting well ahead of ourselves. We'll also have to figure out how to change traffic law enforcement for the automated age, have broader conversations about whether or not consumers have the right to tinker with the cars they own, and resolve our apparent inability to adhere to even basic security standards when designing such "smart" vehicles. These are all questions we have significantly less time to answer than most people think.
In what is likely a sign of the coming government-rent-seeking apocalypse, Joshua Browder, a 19-year-old Stanford student from the UK, has created a bot that assists users in challenging parking tickets. The tickets that inevitably result from parking nearly anywhere can now be handled with something other than a) meekly paying the fine or b) throwing them away until a bench warrant is issued.
In the 21 months since the free service was launched in London and now New York, Browder says DoNotPay has taken on 250,000 cases and won 160,000, giving it a success rate of 64% appealing over $4m of parking tickets.
Fighting parking tickets is a good place to start, considering most people aren't looking to retain representation when faced with questionable tickets. The route to a successful challenge isn't always straightforward, so it's obviously beneficial to have some guidance in this area -- especially guidance that can determine from a set of pre-generated questions where the flaw in the issued ticket might lie.
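DoNotPay's internals aren't public, so the following is only a sketch of the general pattern described above -- pre-generated questions mapped to possible appeal grounds -- with every question and defense invented purely for illustration:

```python
# Sketch of a rule-based ticket-appeal flow. The questions and defenses
# below are invented for illustration; DoNotPay's actual logic isn't public.
QUESTIONS = {
    "signs_visible": "Were the parking restriction signs clearly visible? (y/n) ",
    "bay_marked": "Was the parking bay clearly marked? (y/n) ",
    "ticket_details_correct": "Are the date, time and location on the ticket correct? (y/n) ",
}

DEFENSES = {
    "signs_visible": "The restriction was not adequately signposted.",
    "bay_marked": "The bay markings were unclear or missing.",
    "ticket_details_correct": "The ticket contains factual errors and is invalid.",
}

def suggest_defenses(answers):
    """Return candidate appeal grounds for every question answered 'no'."""
    return [DEFENSES[key] for key, answer in answers.items() if answer == "n"]

if __name__ == "__main__":
    answers = {key: input(prompt).strip().lower() for key, prompt in QUESTIONS.items()}
    for ground in suggest_defenses(answers) or ["No obvious grounds found -- paying may be simplest."]:
        print("-", ground)
```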
Anyone looking for an expansion of chatbots into trickier areas of criminal law is probably going to need to rein in their enthusiasm. There's not much at stake individually in challenging a parking ticket: the 36% who haven't seen a successful appeal are no worse off than they were in the first place. But it's still better than simply assuming that paying the fine is the only option, especially when the ticket appears to be bogus.
Browder has plans for similar bot-based legal guidance in the future.
Browder’s next challenge for the AI lawyer is helping people with flight delay compensation, as well as helping the HIV positive understand their rights and acting as a guide for refugees navigating foreign legal systems.
The fight against airlines should prove interesting. Generally speaking, airlines aren't willing to exchange their money for people's time, especially when the situation creating the delay is out of their hands -- which seems to be every situation, whether it's a snowstorm or a passenger confusing math with terrorism. But if this bot proves as successful as Browder's first one, expect more grumbling from companies whose business models have just been interfered with.
We have computers that can beat us at games like chess and Go (and Jeopardy!), but we haven't seen too many robots that can beat humans at more physical sports like soccer or tennis. We've seen some air hockey robots that are nearly unbeatable, so it's really only a matter of time before robots learn how to play sports with a few more dimensions. Here are some badminton robots that are inching toward playing better than some of us.
Badminton robots are getting better, slowly. This robot has binocular vision from two cameras and was built by students at the University of Electronic Science and Technology of China. However, it cheats a little bit by using two rackets...
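The "binocular vision" bit is the most approachable part of that design: with two calibrated cameras, the depth of the shuttlecock falls out of simple triangulation. Here's a minimal sketch; the focal length, camera baseline and pixel coordinates are made-up example values, not the team's actual calibration:

```python
# Stereo (binocular) depth estimation by triangulation -- a minimal sketch.
# The camera parameters and pixel coordinates below are made-up examples.
def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth = f * B / d, where d is the horizontal disparity in pixels."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("Point must appear further left in the left image.")
    return focal_px * baseline_m / disparity

# A shuttlecock seen at x=640 in the left image and x=600 in the right,
# with a 700-pixel focal length and a 12 cm baseline between the cameras:
print(f"{depth_from_disparity(700, 0.12, 640, 600):.2f} m")  # ~2.10 m away
```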
Robots are getting better at performing complex tasks all the time. It won't be too long before they can drive cars and deliver packages (and replace about a quarter of a million human workers who drive for UPS/FedEx/USPS/etc). The technology isn't quite there yet, but it doesn't seem too far off. We're still nowhere near the Rosie the Robot servant predicted in the 1960s, but we're getting closer. Check out these marginally helpful robots for the home that could beat flying cars and pneumatic tube transportation to becoming a reality.
The old Garbage In, Garbage Out (GIGO) principle originated during the early days of computing, but it may be even more applicable today. With the explosion of data that can now be collected, there's a temptation to assume that analyses and meta-analyses can make sense of it all and produce incredible insights. However, we should probably bring some skepticism before we jump into the deep end of data and expect miraculous results.
I'm going to dispense with any introduction here, because the meat of this story is amazing and interesting in many different ways, so we'll jump right in. Blade Runner, based on Philip K. Dick's classic novel Do Androids Dream of Electric Sheep?, is a film classic in every last sense of the word. If you haven't seen it, you absolutely should. And if you indeed haven't seen the movie, you've watched at least one less film than an amazing artificial intelligence built by Terence Broad, a London-based researcher working on his advanced degree in creative computing.
His dissertation, "Autoencoding Video Frames," sounds straightforwardly boring, until you realize that it's the key to the weird tangle of remix culture, internet copyright issues, and artificial intelligence that led Warner Bros. to file its takedown notice in the first place. Broad's goal was to apply "deep learning" — a fundamental piece of artificial intelligence that uses algorithmic machine learning — to video; he wanted to discover what kinds of creations a rudimentary form of AI might be able to generate when it was "taught" to understand real video data.
The practical application of Broad's research was to instruct an artificial neural network, an AI that is something of a simulacrum of the human brain or thought process, to watch Blade Runner several times and attempt to reconstruct its impression of what it had seen. In other words, the original film is the film as interpreted through human eyes, while Broad's AI reconstructed what the film essentially looks like through the eyes of an artificial intelligence. And if that hasn't gotten your heart rate up a bit, then you and I live on entirely different planets.
The AI first had to learn to discern Blade Runner footage from other footage. Once it had done that, Broad had the AI "watch" numerical representations of frames from the film and then attempt to reconstruct them into a visual medium.
Once it had taught itself to recognize the Blade Runner data, the encoder reduced each frame of the film to a 200-digit representation of itself and reconstructed those 200 digits into a new frame intended to match the original. (Broad chose a small file size, which contributes to the blurriness of the reconstruction in the images and videos I've included in this story.) Finally, Broad had the encoder resequence the reconstructed frames to match the order of the original film.
Broad repeated the "learning" process a total of six times for both films, each time tweaking the algorithm he used to help the machine get smarter about deciding how to read the assembled data. Here's what selected frames from Blade Runner looked like to the encoder after the sixth training. Below we see two columns of before/after shots. On the left is the original frame; on the right is the encoder's interpretation of the frame.
Below is video of the original film and the reconstruction side by side.
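For those curious about the mechanics, the core idea -- squeeze each frame down to a small numerical code and then reconstruct the image from that code -- is the standard autoencoder setup. The sketch below (assuming PyTorch) shows that general technique with a 200-dimensional bottleneck; the frame and layer sizes are illustrative stand-ins, not Broad's actual architecture, which he describes in his paper:

```python
# A generic frame autoencoder with a 200-dimensional bottleneck -- a sketch of
# the general technique, not Broad's actual models from "Autoencoding Video
# Frames". Assumes PyTorch; all sizes here are illustrative.
import torch
import torch.nn as nn

FRAME_H, FRAME_W = 64, 96   # downscaled frame size (illustrative)
LATENT_DIM = 200            # the "200-digit representation" of each frame

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        flat = 3 * FRAME_H * FRAME_W
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 1024), nn.ReLU(),
            nn.Linear(1024, LATENT_DIM),          # compress the frame to 200 numbers
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, flat), nn.Sigmoid(),  # reconstruct pixel values in [0, 1]
            nn.Unflatten(1, (3, FRAME_H, FRAME_W)),
        )

    def forward(self, frames):
        return self.decoder(self.encoder(frames))

model = FrameAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

frames = torch.rand(8, 3, FRAME_H, FRAME_W)   # stand-in for a batch of film frames
for _ in range(5):                            # a few toy training steps
    reconstruction = model(frames)
    loss = loss_fn(reconstruction, frames)    # how closely the output matches the frame
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```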
The blur and other image issues are due in part to the compression of the footage the AI was asked to learn from and to the way it reconstructed it. Regardless, the output is amazingly accurate. The irony of having this AI learn to do this via Blade Runner specifically was intentional, of course. The irony of one unintended response to the project was not.
Last week, Warner Bros. issued a DMCA takedown notice to the video streaming website Vimeo. The notice concerned a pretty standard list of illegally uploaded files from media properties Warner owns the copyright to — including episodes of Friends and Pretty Little Liars, as well as two uploads featuring footage from the Ridley Scott movie Blade Runner.
Just a routine example of copyright infringement, right? Not exactly. Warner Bros. had just made a fascinating mistake. Some of the Blade Runner footage — which Warner has since reinstated — wasn't actually Blade Runner footage. Or, rather, it was, but not in any form the world had ever seen.
Yes, Warner Bros. DMCA'd the video of this project. To its credit, it later rescinded the takedown request, but the project has fascinating implications for the copyright process and its collision with this kind of work. For instance, if automatic crawlers looking for film footage snagged this on their own, is that essentially punishing Broad's AI for doing its task so well that its interpretation of the film closely matched the original? And, at a more basic level, is the output of the AI even a reproduction of the original film, subjecting it to the DMCA process, or is it some kind of new "work" entirely? As the Vox post notes:
In other words: Warner had just DMCA'd an artificial reconstruction of a film about artificial intelligence being indistinguishable from humans, because it couldn't distinguish between the simulation and the real thing.
Others have made the point that if the video is simply the visual interpretation of the "thoughts" of an artificial intelligence, how is it copyrightable? One can't copyright thoughts, after all, only the expression of those thoughts. And if these are the thoughts of an AI, are they subject to copyright at all, given that the AI isn't "human"? I'm going to leave entirely alone the obvious follow-up question of how we're going to define "human," because, hell, that's the entire point of Dick's original work.
Broad noted to Vox that the way he used Blade Runner in his AI research doesn't exactly constitute a cut-and-dried legal case: "No one has ever made a video like this before, so I guess there is no precedent for this and no legal definition of whether these reconstructed videos are an infringement of copyright."
It's an as-yet-unanswered question, but one that will need to be tackled. Video encoding and delivery, like many other currently human tasks, is ripe for the kind of AI that Broad is trying to develop. The closer software gets to becoming wetware, the more urgently these copyright questions will have to be answered, lest they get in the way of progress.
It's a source of wonder and excitement for some, panic and concern for others, and a whole lot of cutting-edge work for the people actually making it happen: artificial intelligence, the endgame for computing (and, as some would have you believe, humanity). But when you set aside the sci-fi predictions, doomsday warnings and hypothetical extremes, AI is a real thing happening all around us right now -- and achieving some pretty impressive feats: