Washington's Growing AI Anxiety
from the perhaps-AI-can-help-us-deal-with-AI dept
Most people don't understand the nuances of artificial intelligence (AI), but at some level they comprehend that it will be big, transformative and disruptive across multiple sectors. And even if AI proliferation won't lead to a robot uprising, Americans are worried about how AI and automation will affect their livelihoods.
Recognizing this anxiety, our policymakers have increasingly turned their attention to the subject. In the 115th Congress, there have already been more mentions of “artificial intelligence” in proposed legislation and in the Congressional Record than ever before.
While not everyone agrees on how we should approach AI regulation, one approach that has gained considerable interest is augmenting the federal government's expertise and capacity to tackle the issue. In particular, Sen. Brian Schatz has called for a new commission on AI; and Sen. Maria Cantwell has introduced legislation setting up a new committee within the Department of Commerce to study and report on the policy implications of AI.
This latter bill, the “FUTURE of Artificial Intelligence Act” (S. 2217/H.R. 4625), sets forth a bipartisan proposal that seems to be gaining some traction. While the bill's sponsors should be commended for taking a moderate approach in the face of growing populist anxiety, it's not clear that the proposed advisory committee would be particularly effective at everything it sets out to do.
One problem with the bill is how it sets the definition of AI as a regulatory subject. For most of us, it's hard to articulate precisely what we mean when we talk about AI. The term “AI” can describe a sophisticated program like Apple's Siri, but it can also refer to Microsoft's Clippy, or pretty much any kind of computer software.
It turns out that AI is a difficult thing to define, even for experts. Some even argue that it's a meaningless buzzword. While this is a fine debate to have in the academy, prematurely enshrining a definition in statute – as this bill does – means that definition is likely to become the basis for future policy (indeed, another recent bill offers a totally different definition). Down the road, this could lead to confusion and misapplication of AI regulations. The provision also seems unnecessary, since the committee is empowered to change the definition for its own use.
The committee's stated goals are also overly ambitious. In the course of a year and a half, it would set out to “study and assess” over a dozen different technical issues, from economic investment to worker displacement to privacy to government use and adoption of AI (although, notably, not defense or cyber issues). These are all important issues. However, the expertise required to adequately deal with these subjects is likely beyond the capabilities of the committee's 19 voting members, only five of whom are academics. While the committee could theoretically choose to focus on a narrower set of topics in its final report, this structure is fundamentally not geared towards producing the sort of deep analysis that would advance the debate.
Instead of trying to address every AI-related policy issue with one entity, a better approach might be to build separate, specialized advisory committees based in different agencies. For instance, the Department of Justice might have a committee on using AI for risk assessment, the General Services Administration might have a committee on using AI to streamline government services and IT infrastructure, and the Department of Labor might have a committee on worker displacement caused by AI and automation or on using AI in employment decisions. While this approach risks some duplicative work, it would also be much more likely to produce deep, focused analysis relevant to specific areas of oversight.
Of course, even the best public advisory committees have limitations, including politicization, resource constraints and compliance with the Federal Advisory Committee Act. However, not all advisory bodies have to be within (or funded by) government. Outside research groups, policy forums and advisory committees exist within the private sector and can operate beyond the limitations of government bureaucracy while still effectively informing policymakers. Particularly for those issues not directly tied to government use of AI, academic centers, philanthropies and other groups could step in to fill this gap without any need for new public expenditures or enabling legislation.
If Sen. Cantwell's advisory committee-focused proposal lacks robustness, Sen. Schatz's call for creating a new “independent federal commission” with a mission to “ensure that AI is adopted in the best interests of the public” could go beyond the bounds of political possibility. To his credit, Sen. Schatz identifies genuine challenges with government use of AI, such as those posed by criminal justice applications and by the need to coordinate between different agencies. These are real issues that warrant thoughtful solutions. Nonetheless, the creation of a new agency for AI is likely to run into a great deal of pushback from industry groups and the political right (as similar proposals have in the past), making it a difficult proposal to move forward.
Beyond creating a new commission or advisory committees, the challenge of federal expertise in AI could also be substantially addressed by reviving Congress' Office of Technology Assessment (which I discuss in a recent paper with Kevin Kosar). Reviving OTA has a number of advantages: OTA ran effectively for years and still exists in statute, it isn't a regulatory body, it is structurally bipartisan and it would have the capacity to produce deep-dive analysis in a technology-neutral manner. Indeed, there's good reason to strengthen the First Branch first, since Congress is ultimately responsible for making the legal frameworks governing AI as well as overseeing government usage.
Lawmakers are right to characterize AI as a big deal. Indeed, there are trillions of dollars in potential economic benefits at stake. While the instincts to build expertise and understanding first make for a commendable approach, policymakers will need to do it the right way – across multiple facets of government – to successfully shape the future of AI without hindering its transformative potential.
Filed Under: ai, artificial intelligence, brian schatz, committees, machine learning, maria cantwell, regulation
Reader Comments
Algorithms
Re: Algorithms
Unless someone can PAY OFF the programmers to make the AI lean a bit to one side or the other..
NOPE.
Until they can hack it, in some form.. nope.
You can even look at the computer voting systems; it's almost stupid. No one gets to look at the data except the two groups.. no independent, unbiased consideration.
The systems have been shown to be HACKABLE.. and no independent creator can get a chance to improve on them, or SHOW that it is possible to protect the system.
Let me say it this way..
the IRS is so OLD TECH, it can't take a corporate tax filing and work out who paid what to whom and where the money went..
It would need a state-by-state list of every company and business created, including the temporary ones set up for external purchases that will soon just disappear. And trying to prove who did what, and how, is a PAIN.. no pictures of those creating the small QUICK businesses are ever taken, no fingerprints.. nothing..
An honest corp is as hard to find as an honest politician.. they are out there, but we need to find them.
Re: Algorithms
It doesn't help for "modern" forms of AI. When you dump 10 million mortgage applications plus metadata (payment history, addresses, etc.) into a machine-learning algorithm, then use it to approve/reject people, nobody's going to be able to tell you why it approved or rejected someone. The training data can't be released because of privacy concerns; if it could, it's way too much to review manually; and if you had everything, you'd still be hard-pressed to explain what's happening (for example, whether a rejection was based in "racism"). And if someone did find an unwelcome correlation, the fix won't be obvious.
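To make that concrete, here's a toy sketch of the problem- invented data, with scikit-learn's gradient-boosted trees standing in for a real underwriting model:

```python
# Toy illustration: an ensemble model trained on synthetic "mortgage" data.
# All features and labels are invented; real systems use millions of rows.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([
    rng.normal(650, 80, n),      # credit score
    rng.normal(0.35, 0.1, n),    # debt-to-income ratio
    rng.integers(0, 30, n),      # years of payment history
])
# Labels come from an unknown, noisy combination of features.
y = ((X[:, 0] > 620) & (X[:, 1] < 0.45) | (rng.random(n) < 0.05)).astype(int)

model = GradientBoostingClassifier(n_estimators=300).fit(X, y)

# Asking "why was applicant 0 approved or rejected?" has no short answer:
# the prediction is a sum over the paths taken through all 300 trees.
print(model.predict(X[:1]))      # a single yes/no decision...
print(len(model.estimators_))    # ...produced by 300 stacked trees
```

Even with full access to the model, the honest answer to "why?" is "the sum of 300 tree traversals said so."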
Re:
We got saddled with krappy, rushed legislation by panicked lawmakers who needed to "do something."
Re:
The paperclip maximizer AI is a good example of this. You can build an AI with the seemingly harmless goal of making as many paperclips as possible. As it grows and learns, that AI may take over our metal resources entirely to complete its goal, leaving us without a valuable resource.
Obviously this is a farcical example, but the point is that it's those less flashy ways in which AI is likely to do us harm.
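A deliberately silly sketch of the point- the objective values paperclips and nothing else, so the "optimal" behavior is to consume every last unit of metal:

```python
# A toy "paperclip maximizer": the objective says nothing about leaving
# metal for anyone else, so the best policy consumes all of it.
world = {"metal": 1000, "paperclips": 0}

def reward(state):
    return state["paperclips"]      # the ONLY thing the agent values

def step(state):
    if state["metal"] > 0:          # highest-reward action: convert metal
        state["metal"] -= 1
        state["paperclips"] += 1
    return state

while world["metal"] > 0:
    world = step(world)

print(world)  # {'metal': 0, 'paperclips': 1000} -- goal met, resource gone
```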
Re: Re:
Once we have a GIAI, general intelligence AI,
We won't.
History of an AI project.
Type 1
"I will build an intelligent machine that will mimic the human brain using the latest technology plus a lot of input data plus some (super secret) idea of my own plus a LOT of money"
Outcome - a roomful of hardware that looks impressive on a documentary program - but a few years later the project is in the dustbin of history.
Type 2
"I will build a machine that will perform a difficult task - one that requires intelligence to perform" (examples chess, Go etc)
Outcome 1 Failure
Outcome 2 Success - but the result is an algorithm that allows the application of brute force computation to the problem and true generalisation turns out to be a mirage.
This has happened so many times over the years that I can't see it ever changing.
However soft optimisation algorithms (which is what we are really talking about here) have been extraordinarily successful in specific problems lately it is just the overarching idea of an "electronic brain that is smarter than humans" that is a myth.
Re: Re: Re:
Insofar as there is a real threat, it comes from incompetent and corrupt politicians allowing weapons or other dangerous things to be controlled by software that is not fit for purpose and has not been properly evaluated by qualified people who know what they are doing.
Re:
AI in real life doesn't really look like it does in science fiction movies. Real life is simultaneously far more mundane and, yet, far weirder.
Take a look at something like Deepfakes. (You probably don't want to Google that if you're at work or in public.) We've got algorithms that can analyze video of a person's face and then use that analysis to replace somebody else's face in a video. Predictably, this is being used for porn, which has some nasty consent issues (both Reddit and Pornhub have banned Deepfakes). It's also being used for innocuous, funny videos on YouTube, like one that puts Nicolas Cage in other movies.
But the long-term ramifications of this technology are huge: we're not far off from a day when somebody can take video and audio of a person and use them to create fake video and audio of that person, and have the result be realistic enough that a casual observer can't tell the difference. This has some major ramifications for the "fake news" issue -- both of the "propaganda sites pushing fake stories" variety and the "people see a story that tells the truth and just call it fake news because they don't want to believe it" variety.
That's one of the more interesting examples of modern AI, but there are lots of others, many of which are more mundane or esoteric. We've got self-driving cars on the horizon, which will put people who drive vehicles for a living out of their jobs. We've got video games and social media sites mathematically designed to "maximize engagement" -- that is to say, to exploit human psychology and manipulate people into coming back.
There's also a push for integrating deep learning algorithms into criminal profiling. Properly implemented, this has a lot of potential for good -- but if you just feed a computer a bunch of existing criminal data with no guidance, you're merely codifying the biases already inherent in our criminal justice system. Garbage in, garbage out.
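A toy sketch of that garbage-in, garbage-out dynamic, with all numbers invented: two neighborhoods offend at the same rate, but one was policed twice as heavily, and the model dutifully learns the difference in policing as a difference in risk.

```python
# Toy illustration of "garbage in, garbage out": a model trained on
# historically biased arrest labels reproduces the bias. All data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
neighborhood = rng.integers(0, 2, n)     # 0 or 1, a proxy attribute
offense = rng.random(n) < 0.10           # true offending rate: equal!

# Historical labels: neighborhood 1 was policed twice as heavily,
# so its offenses were recorded twice as often.
recorded = offense & (rng.random(n) < np.where(neighborhood == 1, 0.8, 0.4))

X = neighborhood.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, recorded)

# The model "learns" that neighborhood 1 is riskier, although the
# underlying offense rate was identical by construction.
print(model.predict_proba([[0.0]])[0, 1], model.predict_proba([[1.0]])[0, 1])
```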
Re: Re:
Like everything else, complete bullshit.
Well, it isn't what they claim.
We don't even have the beginnings of AI. The machine learning is still ham-fisted and offers more brute force than actual intelligence. What we have is either carefully selected learning or brute force.
There have been great strides recently in machine learning (Go, for instance). But the distance from even the best of those to AI is measured with sticks we don't even have yet.
Re: Well, it isn't what they claim.
I'm not sure what SF books you're reading, but AI's been part of the genre for as long as there's been a science fiction genre. What do you think robots are?
AI's everywhere in SF, from the respected "hard SF" authors (Asimov, Clarke) to more mass-appeal stuff (Star Trek, Star Wars).
I don't know what books you're reading. There's a lot of SF that doesn't involve AI or robots, but there's a whole lot that does.
Re: Re: Well, it isn't what they claim.
Clarke is not that noted for robots, unless you count HAL, the slightly nutjob and very murderous assistant. That was far from AI. At best it was a holographic neural network designed to assist the victims. The first attempt was in 1948.
Re: Re: Re: Well, it isn't what they claim.
You're right, Clarke was not very well-known for AIs, unless you count his single best-known character.
Bullshit.
In 2010, Hal goes through the Monolith and is transformed, just like Bowman in 2001. The implication is, unambiguously, that the Monolith recognizes Hal's intelligence just as clearly as it does Dave's.
Re: Re: Well, it isn't what they claim.
Coming from a life of programming, I would have to classify most of them as just machines, not actually AI. The books written in the last ten years seem to call everything that appears to be advanced programming AI. Which they are not.
Re: Re: Re: Well, it isn't what they claim.
You seem to be taking an extremely narrow definition of "artificial intelligence" -- the idea that it's only AI if it's self-aware and thinks like a human. That's one definition, sure, but I don't think it's what we're talking about here; we're talking about deep-learning algorithms, which, if you're "coming from a life of programming", you presumably realize are considered a type of AI in CS.
(That said, if your definition is the narrow SF "self-aware/indistinguishable from humans" version, Daneel was arguably already there in The Caves of Steel (1953), and certainly by Prelude to Foundation (1988), where he successfully impersonates a human for an extended period of time. He still needs some help performing human actions like laughter, but he makes a more believable human than Tommy Wiseau.)
Re: Re: Re: Well, it isn't what they claim.
I've been reading SF since the early 60's. There are few AI/robots as sentient beings. The vast majority are clearly machinery.
David, can I recommend Iain M. Banks's Culture series, with its extraordinary and unique Minds? Excession in particular explores how even the most sublime AIs can become eccentric, corrupted and politicized.
https://en.wikipedia.org/wiki/Culture_series
Re: Well, it isn't what they claim.
I've been reading a few SciFi books, with space battles, etc. and I can say that AI isn't in any of them.
How about "But who can replace a man?" by Brian Aldiss
https://img.4plebs.org/boards/tg/image/1494/26/1494260422851.pdf
Re: Well, it isn't what they claim.
The Isaac Asimov robot series; Asimov is also the originator of the Three Laws of Robotics.
Re: Re: Well, it isn't what they claim.
They were much closer than quite a few of the newer stories I've read lately. However, only one was "artificial intelligence" in my view: The Bicentennial Man.
The rest were explaining how to work within the framework of the Three Laws.
Re: Re: Re: Well, it isn't what they claim.
As to age, well, Neil Armstrong's small step ushered in my college years.
process of creation, and intent of creator.
How is AI programmed/taught to think? Can it be through any other means than bias and fallacy?
If that sounds weird, maybe exploring the topics in a cognitive science context will help in understanding what I'm trying to get at- obviously they don't all apply, but many would be fundamental to any useful AI.
https://en.wikipedia.org/wiki/List_of_cognitive_biases
https://en.wikipedia.org/wiki/List_of_fallacies
AI could achieve both wondrous and treacherous things- it's the state of the world and the severe imbalance of power/control/resources that makes it so terrifying, and bleak. It will serve its masters' interests; unfortunately most of those masters, be they government or corporate, are sycophantic and/or sociopathic by design- profit/result focused, with non-prioritized concern for collateral damage or unintended consequences. Responsibility for these things has been abstracted away from these groups, such that incentives make caring about them not just counterproductive and inefficient, but often a threat to their very livelihood.
AI has the potential to vastly increase this perverse incentive relationship with responsibility. To some, that's a powerful feature, not a bug.
In short- it's the people/systems/institutions that will control it that make AI scary. AI has as much potential to amplify humanity's worst traits as it does our best ones- the nature and incentive structure of the people involved make the former much more likely. The investors expect a return.
Re: process of creation, and intent of creator.
How is AI programmed/taught to think? Can it be through any other means than bias and fallacy? If that sounds weird- maybe exploring the topics in a cognitive science context will help in understanding what I'm trying to get at- obviously they don't all apply, but many would be fundamental to any useful AI.
The one positive conclusive result from all the AI research in the last 40 years is that this kind of approach does not work. You can't "program" these concepts into an AI.
Successful AI approaches rely on exposing a relatively neutral basic algorithm to a very large amount of data and allowing it to identify common patterns and then encode them into its internal memory to create a generalised map of the domain.
Unfortunately this requires that the AI's field of action is comparatively limited.
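As a minimal sketch of that recipe, here's a domain-blind algorithm (scikit-learn's k-means, with toy data) distilling its "map" from nothing but exposure to examples:

```python
# A minimal sketch of the "neutral algorithm + lots of data" recipe:
# k-means knows nothing about the domain; it just encodes the common
# patterns it finds into its internal memory (the centroids).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Data drawn from three unknown "patterns" (clusters)
data = np.vstack([rng.normal(loc, 0.5, (500, 2)) for loc in (0, 5, 10)])

model = KMeans(n_clusters=3, n_init=10).fit(data)
print(model.cluster_centers_)  # the learned "generalised map of the domain"
```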
Re: Re: process of creation, and intent of creator.
Do you have a short-form reference that shows the positive conclusive results you mention? I'd love to challenge my ideas on this stuff; honestly they're more than a bit unsettling.
Re: Re: Re: process of creation, and intent of creator.
The evidence is there simply in the methods that have worked and the methods that have failed.
In short, for those very well defined problems where a large amount of effort has been made it has been found that what has succeeded have been methods that allow brute force computation to be applied. Both Chess and, much more recently, Go have been conquered by programs that were comparatively "dumb". Techniques that attempted to encode human thought processes have been profoundly unsuccessful.
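For a flavor of how "dumb" that brute force is, here's a toy exhaustive game-tree search that plays a simple Nim variant perfectly, with zero strategic knowledge encoded anywhere:

```python
# Brute force game-tree search on Nim: take 1-3 stones per turn,
# taking the last stone wins. No human strategy is built in; the
# program just exhausts every line of play.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win."""
    if stones == 0:
        return False  # previous player took the last stone and won
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

# The search "discovers" the classic result: multiples of 4 are lost.
print([n for n in range(1, 13) if not can_win(n)])  # [4, 8, 12]
```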
All of this means that the issue you raised is essentially irrelevant. AI (at least the kind that works) cannot be programmed directly either to follow human bias or to avoid it. The kind of heuristics that are chosen, where you might think that there was an opportunity to insert bias, have to be chosen because they actually make the system work and on no other grounds (otherwise the system doesn't work).
In other words any bias that exists comes from the way the problem is posed in the first place, or even in which problems are chosen.
However, in my opinion the biggest issue is that the ignorant politicians who control these things get suckered by clever salesmen who convince them that the system can do way more than it really can. To the politicians the system is "magic". As Arthur C. Clarke said, "any sufficiently advanced technology is indistinguishable from magic" - but then again, as David Copperfield, David Blaine or Dynamo might tell you, "any sufficiently clever fraud is indistinguishable from magic" - and they should know - it's how they earn their livings!
Re: Re: Re: Re: process of creation, and intent of creator.
In any case- the brain/nervous system of a simple worm was modeled back in 2013; there's even an open source version you can download and put into a Lego robot now. There are far more ambitious projects aimed at modeling the entire human brain, with billions of dollars in backing. So maybe your categorical interpretation of what AI is, or rather could be, is too limited?
A difficulty with recreating intelligence is that intelligence is necessarily dynamically egocentric, self-referential, and recursive- these limitations represent an ouroboros of sorts. Bias is a natural phenomenon- heuristic bias being practically universal in thought processes. No matter the form, it seems impossible to imagine that we could create an artificial intelligence that wouldn't suffer from the same issues as our own natural intelligence- likely more so. You can reinvent the wheel, but if it's a decent working design, it's still going to be similar, even if it's out of sight- tucked into a jet, doing Mach 2.
The danger as such may not be so much that a successful AI can recreate intelligence somehow free of these limitations, but that it could redefine intelligence entirely- as out of reach of the individual, but firmly in the grip of its masters. Even if we had full transparency (a pipe dream), AI will eventually create heuristic biases that are quantitatively unfeasible to examine; we will accept them because they will seem mostly correct, and far more accurate/efficient than what we can achieve without them.
A solution that is even 99% correct is still fallacious, though- compound tens or hundreds of thousands of measures with near-ubiquitous application and you have a recipe that could include cascading false positives/negatives. With the right cook, that could make anyone look like (or perhaps become) a potential customer, patient, or criminal.
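The back-of-the-envelope math bears that out. With toy numbers:

```python
# Back-of-the-envelope math for the "99% correct" worry: apply a test
# that's right 99% of the time to 100,000 independent measures and the
# expected number of errors is still large.
accuracy = 0.99
measures = 100_000
expected_errors = measures * (1 - accuracy)
p_all_correct = accuracy ** measures
print(expected_errors)  # 1000 expected false positives/negatives
print(p_all_correct)    # probability of zero errors: effectively 0
```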
Success isn't always about making something that works as intended- it's about making something useful. Function does not always follow form. Successful AI may not be so much about fixing the inherent flaws in the recipe as it is about finding the most useful cooks. Useful to whom, and for what purposes, then becomes the question.
None of these commissions have ever worked in the past. None of them. They are pointless. Worse than pointless - they'll be captured by political interests almost immediately and will then give harmful advice. And if they don't tell the politicians what the politicians want to hear, they'll just be ignored.
How many times do we have to go through this? The political process is *incapable* of 'getting ahead of the problem' because - until the problem shows up - the policy makers are *incapable* (and yes, that includes the 'experts' too) of knowing what the problem will be.
Sure, it would be nice if politicians actually educated themselves on the details of the areas they are making policy in. Have they ever? Look at the internet. Twenty years of mainstream use and half of them still think it's 'a series of tubes'.
The future impact of AI is indeed an issue worth resources:
https://80000hours.org/career-guide/world-problems/
Not sure the government is best suited to be the one to solve it though.
AI does not exist
AI regulation is EASY!
How do we regulate intelligence?
DeVos is eagerly attempting the easy way to accomplish this, but there will be some who escape and self-educate; what will they do about these miscreants?
The problem of AI vs Human
Even with the great strides taken by humanity over the last couple of millennia, we are still no further along in understanding the nature of the universe around us. We may have lots of theories about this universe but, when push comes to shove, we have very little or no understanding of that nature.
A simple example can be seen in the latest efforts to beat humans playing Go. The machine that won did so by running at a "clock speed" at least a million times faster than the "clock speed" of the human. It had only to process the game; it did not have to handle the thousands, if not hundreds of thousands, of i/o channels that the human had to.
We do not understand how humans process information, especially when humans have such huge numbers of i/o channels providing continuous interrupts to the main processor in addition to providing continuous feedback to all of the various control systems found in a biological system.
Re: The problem of AI vs Human
The language you use in your first two paragraphs is honestly baffling... What is your frame of reference as to what would constitute understanding intelligence, or the nature of the universe, in a 'meaningful' way? It sounds like an appeal to ignorance, though maybe you've just failed to provide necessary context. Are you a cosmic nihilist or something?
There is considerable ambiguity and scope in both the definition and the real-world use of the word intelligence. Both Albert Einstein and a simple worm have a form of intelligence- though obviously those are very different things, the word can apply to both. You wouldn't call a worm 'intelligent life', but the mechanisms that have allowed worms to thrive through eons are a rudimentary form of stimulus/response intelligence.
This is a reason some consider things like complex algorithms to be a form of simple artificial intelligence. The fact that slick marketers have failed to differentiate simple AI from complex (sci-fi-like) AI in the public mind doesn't mean that simple AI doesn't exist. Tesla's Autopilot is one such simple AI; Google Analytics is another- that's one you probably interact with a lot every day. If you don't know how to block scripts, you're interacting with it right now.
It's extremely foolish to limit your interpretation of AI to only include high-level, human-like cognitive skills; you're missing the very real forest that's growing and evolving, weaving its roots through every foundation of humanity, for the idea of 'impossible' trees.