Congressman Wants Stricter Punishment For Criminals Who Make Themselves Easier To Catch

from the excuse-me? dept

There certainly have been a lot of cases lately where some idiotic criminals videotaped their own crimes and put them on YouTube. For some reason, though, this has people thinking that somehow YouTube is to blame. However, even the police seem to recognize that the opposite is true. If anything, stupid criminals putting their own evidence on YouTube makes it so much easier to catch and prosecute the criminals. But, don't tell that to Congressional Representative Mario Diaz-Balart, who is proposing a bill that would push for stiffer penalties against criminals who put their videos on sites like YouTube. In other words, for handing over the evidence to help law enforcement prosecute them, they deserve harsher punishment.


Reader Comments



  1. jay, 26 Jan 2007 @ 10:01am

    Now that's plain nutty.


  2. Sanguine Dream, 26 Jan 2007 @ 10:05am

    Go right ahead...

    Put harsher punishments on them and stop people from being dumb enough to put vids up of their crimes. This is just a case of political grandstanding. Just leave the law alone and continue catching the dummies.


  3. shrugger, 26 Jan 2007 @ 10:33am

    Eh -- what's the big deal?

    These people are so stupid to begin with that there's no way an added penalty for posting the vid will enter into their equation of what they should do.

    Having said that, the law makes no sense anyway.


  4. E.T.Cook, 26 Jan 2007 @ 10:41am

    Although I agree with the implications of the law, I think people are failing to realize its catalyst.

    The issue is with individuals trying to seek notoriety with the videos. Thus, the law is supposed to mitigate two things...

    #1. The propensity for crimes to be committed for the sake of creating a video and publishing it.
    #2. The exploitation of crimes after the fact.

    Although I agree that we should probably leave the status quo alone in this particular case, you guys need to at least TRY to gather some insight as to the purpose of the law before expounding on it in your typical e-expert manner.


  5. misanthropic humanist, 26 Jan 2007 @ 11:00am

    the lure of fame

    No. While I agree with the analysis you put forward, Mike, I think it's incomplete. Go read TFA and look for the crucial phrase.

    to further humiliate the victims as well as spread fear and intimidation

    The issue here isn't whether people who post videos of crimes on YouTube, and thus incriminate themselves, make their apprehension easier. Sure, the police love it when criminals effectively turn themselves in.

    The real issues are twofold.

    One is whether the "happy slappers", as we call them over here, are guilty of a secondary offence of humiliation. Literally adding insult to injury by posting the videos of the victims on the net is a hard crime to categorise; perhaps it amounts to some twist on defamation, I don't know, it's a hard one to pigeonhole. It's not that the first crime should be treated more severely, but whether an additional crime has been committed, some sort of psychological damage to the victim who is shown in the video.

    The second is whether the opportunity given by the YouTube site to propagate these videos to a peer group of teenagers who find them amusing amounts to some measure of encouragement. If the opportunity of (apparently anonymous) dissemination didn't exist then it is doubtful whether the happy-slappers would have the same motivation to carry out the attacks.

    This isn't quite the same as the "guns don't kill people" argument, the same one we would apply to filesharing systems too. In those cases the crime is not motivated by the dissemination system; it is merely facilitated by it. So, without p2p filesharing or guns, those crimes of copyright infringement or murder would still happen. In this case it is the attraction of fame and popularity that baits the happy slappers into their crimes. It is an *active* part of the motive.

    As with all "ego-driven" crimes, like website defacement by script kiddies and tagging graffiti where the perpetrator leaves their signature, these idiots drop themselves into the frame on purpose to get the "glory" of their actions.

    I'm not saying that YouTube should bear some legal culpability here, or that it should be shut down or anything stupid. I'm saying the arguments of Diaz-Balart are not as silly as they first seem.


  6. Anonymous Coward, 26 Jan 2007 @ 11:01am

    Predator Panic: A Closer Look
    Benjamin Radford




    “Protect the children.” Over the years that mantra has been applied to countless real and perceived threats. America has scrambled to protect its children from a wide variety of dangers including school shooters, cyberbullying, violent video games, snipers, Satanic Ritual Abuse, pornography, the Internet, and drugs.

    Hundreds of millions of taxpayer dollars have been spent protecting children from one threat or other, often with little concern for how expensive or effective the remedies are—or how serious the threat actually is in the first place. So it is with America’s latest panic: sexual predators.

    According to lawmakers and near-daily news reports, sexual predators lurk everywhere: in parks, at schools, in the malls—even in children’s bedrooms, through the Internet. A few rare (but high-profile) incidents have spawned an unprecedented deluge of new laws enacted in response to the public’s fear. Every state has notification laws to alert communities about former sex offenders. Many states have banned sex offenders from living in certain areas, and are tracking them using satellite technology. Other states have gone even further; state emergency leaders in Florida and Texas, for example, are developing plans to route convicted sex offenders away from public emergency shelters during hurricanes. “We don’t want them in the same shelters as others,” said Texas Homeland Security Director Steve McCraw. (How exactly thousands of desperate and homeless storm victims are to be identified, screened, and routed in an emergency is unclear.)

    An Epidemic?
    To many people, sex offenders pose a serious and growing threat—especially on the Internet. Attorney General Alberto Gonzales has made them a top priority this year, launching raids and arrest sweeps. According to Senate Majority Leader Bill Frist, “the danger to teens is high.” On the April 18, 2005, CBS Evening News broadcast, correspondent Jim Acosta reported that “when a child is missing, chances are good it was a convicted sex offender.” (Acosta is incorrect: If a child goes missing, a convicted sex offender is among the least likely explanations, far behind runaways, family abductions, and the child being lost or injured.) On his NBC series “To Catch a Predator,” Dateline reporter Chris Hansen claimed that “the scope of the problem is immense,” and “seems to be getting worse.” Hansen claimed that Web predators are “a national epidemic,” while Alberto Gonzales stated that there are 50,000 potential child predators online.

    Sex offenders are clearly a real threat, and commit horrific crimes. Those who prey on children are dangerous, but how common are they? How great is the danger? After all, there are many dangers in the world—from lightning to Mad Cow Disease to school shootings—that are genuine but very remote. Let’s examine some widely repeated claims about the threat posed by sex offenders.

    One in Five?
    According to a May 3, 2006, ABC News report, “One in five children is now approached by online predators.” This alarming statistic is commonly cited in news stories about the prevalence of Internet predators, but the factoid is simply wrong. The “one in five statistic” can be traced back to a 2001 Department of Justice study issued by the National Center for Missing and Exploited Children (“The Youth Internet Safety Survey”) that asked 1,501 American teens between 10 and 17 about their online experiences. Anyone bothering to actually read the report will find a very different picture. Among the study’s conclusions: “Almost one in five (19 percent) . . . received an unwanted sexual solicitation in the past year.” (A “sexual solicitation” is defined as a “request to engage in sexual activities or sexual talk or give personal sexual information that were unwanted or, whether wanted or not, made by an adult.” Using this definition, one teen asking another teen if he or she is a virgin—or got lucky with a recent date—could be considered “sexual solicitation.”) Not a single one of the reported solicitations led to any actual sexual contact or assault. Furthermore, almost half of the “sexual solicitations” came not from “predators” or adults but from other teens—in many cases the equivalent of teen flirting. When the study examined the type of Internet “solicitation” parents are most concerned about (e.g., someone who asked to meet the teen somewhere, called the teen on the telephone, or sent gifts), the number drops from “one in five” to just 3 percent.

    This is a far cry from an epidemic of children being “approached by online predators.” As the study noted, “The problem highlighted in this survey is not just adult males trolling for sex. Much of the offending behavior comes from other youth [and] from females.” Furthermore, “Most young people seem to know what to do to deflect these sexual ‘come ons.’” The reality is far less grave than the ubiquitous “one in five” statistic suggests.

    Recidivism Revisited
    Much of the concern over sex offenders stems from the perception that if they have committed one sex offense, they are almost certain to commit more. This is the reason given for why sex offenders (instead of, say, murderers or armed robbers) should be monitored and separated from the public once released from prison. While it’s true that serial sex offenders (like serial killers) are by definition likely to strike again, the reality is that very few sex offenders commit further sex crimes.

    The high recidivism rate among sex offenders is repeated so often that it is accepted as truth, but in fact recent studies show that the recidivism rate for sex offenses is not unusually high. According to a U.S. Bureau of Justice Statistics study (“Recidivism of Sex Offenders Released from Prison in 1994”), just five percent of sex offenders followed for three years after their release from prison in 1994 were arrested for another sex crime. A study released in 2003 by the Bureau of Justice Statistics found that within three years, 3.3 percent of the released child molesters were arrested again for committing another sex crime against a child. Three to five percent is hardly a high repeat offender rate.

    In the largest and most comprehensive study ever done of prison recidivism, the Justice Department found that sex offenders were in fact less likely to reoffend than other criminals. The 2003 study of nearly 10,000 men convicted of rape, sexual assault, and child molestation found that sex offenders had a re-arrest rate 25 percent lower than for all other criminals. Part of the reason is that serial sex offenders—those who pose the greatest threat—rarely get released from prison, and the ones who do are unlikely to re-offend. If released sex offenders are in fact no more likely to re-offend than murderers or armed robbers, there seems little justification for the public’s fear and the monitoring laws targeting them. (Studies also suggest that sex offenders living near schools or playgrounds are no more likely to commit a sex crime than those living elsewhere.)

    While the abduction, rape, and killing of children by strangers is very, very rare, such incidents receive a lot of media coverage, leading the public to overestimate how common these cases are. (See John Ruscio’s article “Risky Business: Vividness, Availability, and the Media Paradox” in the March/April 2000 Skeptical Inquirer.)

    Why the Hysteria?
    There are several reasons for the hysteria and fear surrounding sexual predators. The predator panic is largely fueled by the news media. News stories emphasize the dangers of Internet predators, convicted sex offenders, pedophiles, and child abductions. The Today Show, for example, ran a series of misleading and poorly designed hidden camera “tests” to see if strangers would help a child being abducted. [1] Dateline NBC teamed up with a group called Perverted Justice to lure potential online predators to a house with hidden cameras. The program’s ratings were so high that it spawned six follow-up “To Catch a Predator” specials. While the sight of so many men captured on film, supposedly showing up to meet teens for sex, is disturbing, questions have been raised about Perverted Justice’s methods and accuracy. (For example, the predators are often found in unmoderated chatrooms frequented by those looking for casual sex—hardly places where most children spend their time.) Nor is it surprising that out of over a hundred million Internet users, a fraction of a percent might be caught in such a sting.

    Because there is little hard data on how widespread the problem of Internet predators is, journalists often resort to sensationalism, cobbling a few anecdotes and interviews together into a trend while glossing over data suggesting that the problem may not be as widespread as they claim. But good journalism requires that personal stories—no matter how emotional and compelling—must be balanced with facts and context. Much of the news coverage about sexual predation is not so much wrong as incomplete, lacking perspective.

    Moral Panics
    The news media’s tendency toward alarmism only partly explains the concern. America is in the grip of a moral panic over sexual predators, and has been for many months. A moral panic is a sociological term describing a social reaction to a false or exaggerated threat to social values by moral deviants. (For more on moral panics, see Erich Goode and Nachman Ben-Yehuda’s 1994 book Moral Panics: The Social Construction of Deviance.)

    In a discussion of moral panics, sociologist Robert Bartholomew points out that a defining characteristic of the panics is that the “concern about the threat posed by moral deviants and their numerical abundance is far greater than can be objectively verified, despite unsubstantiated claims to the contrary.” Furthermore, according to Goode and Ben-Yehuda, during a moral panic “most of the figures cited by moral panic ‘claims-makers’ are wildly exaggerated.”

    Indeed, we see exactly this trend in the panic over sexual predators. News stories invariably exaggerate the true extent of sexual predation on the Internet, the magnitude of the danger to children, and the likelihood that sexual predators will strike. (As it turns out, Attorney General Gonzales had taken his 50,000 Web predator statistic not from any government study or report, but from NBC’s Dateline TV show. Dateline, in turn, had broadcast the number several times without checking its accuracy. In an interview on NPR’s On the Media program, Hansen admitted that he had no source for the statistic, and stated that “It was attributed to, you know, law enforcement, as an estimate, and it was talked about as sort of an extrapolated number.”) According to Wall Street Journal writer Carl Bialik, journalists “often will use dubious numbers to advance that goal [of protecting children] . . . one of the reasons that this is allowed to happen is that there isn’t really a natural critic. . . . Nobody really wants to go on the record saying, ‘It turns out this really isn’t a big problem.’”

    Panicky Laws
    Besides needlessly scaring children and the public, there is a danger to this quasi-fabricated, scare-of-the-week reportage: misleading news stories influence lawmakers, who in turn react with genuine (and voter-friendly) moral outrage. Because nearly any measure intended (or claimed) to protect children will be popular and largely unopposed, politicians trip over themselves in the rush to endorse new laws that “protect the children.”

    Politicians, child advocates, and journalists denounce current sex offender laws as ineffective and flawed, yet are rarely able to articulate exactly why new laws are needed. Instead, they cite each news story about a kidnapped child or Web predator as proof that more laws are needed, as if sex crimes would cease if only the penalties were harsher, or enough people were monitored. Yet the fact that rare crimes continue to be committed does not necessarily imply that current laws against those crimes are inadequate. By that standard, any law is ineffective if someone violates that law. We don’t assume that existing laws against murder are ineffective simply because murders continue to be committed.

    In July 2006, teen abduction victim Elizabeth Smart and child advocate John Walsh (whose murdered son Adam spawned America’s Most Wanted) were instrumental in helping pass the most extensive national sex offender bill in history. According to Senator Orrin Hatch (R-Utah), the bill’s sponsor, Smart’s 2002 “abduction by a convicted sex offender” might have been prevented had his bill been law. “I don’t want to see others go through what I had to go through,” said Smart. “This bill should go through without a thought.” Yet bills passed without thought rarely make good laws. In fact, a closer look at the cases of Elizabeth Smart and Adam Walsh demonstrates why sex offender registries do not protect children. Like most people who abduct children, Smart’s kidnapper, Brian David Mitchell, was not a convicted sex offender. Nor was Adam Walsh abducted by a sex offender. Apparently unable to find a vocal advocate for a child who had actually been abducted by a convicted sex offender, Hatch used Smart and Walsh to promote an agenda that had nothing to do with the circumstances of their abductions. The two high-profile abductions (neither by sex offenders) were somehow claimed to demonstrate the urgent need for tighter restrictions on sex offenders. Hatch’s bill, signed by President Bush on July 27, will likely have little effect in protecting America’s children.

    The last high-profile government effort to prevent Internet predation occurred in December 2002, when President Bush signed the Dot-Kids Implementation and Efficiency Act into law, creating a special safe Internet “neighborhood” for children. Elliot Noss, president of Internet address registrar Tucows Inc., correctly predicted that the domain had “absolutely zero” chance of being effective. The “.kids.us” domain is now a largely ignored Internet footnote that has done little or nothing to protect children.

    Tragic Misdirection
    The issue is not whether children need to be protected; of course they do. The issues are whether the danger to them is great, and whether the measures proposed will ensure their safety. While some efforts—such as longer sentences for repeat offenders—are well-reasoned and likely to be effective, those focused on separating sex offenders from the public are of little value because they are based on a faulty premise. Simply knowing where a released sex offender lives—or is at any given moment—does not ensure that he or she won’t be near potential victims. Since relatively few sexual assaults are committed by released sex offenders, the concern over the danger is wildly disproportionate to the real threat. Efforts to protect children are well-intentioned, but legislation should be based on facts and reasoned argument instead of fear in the midst of a national moral panic.

    The tragic irony is that the panic over sex offenders distracts the public from the real danger, a far greater threat to children than sexual predators: parental abuse and neglect. The vast majority of crimes against children are committed not by released sex offenders but instead by the victim’s own family, church clergy, and family friends. According to a 2003 report by the Department of Health and Human Services, hundreds of thousands of children are abused and neglected each year by their parents and caregivers, and more than 1,500 American children died from that abuse in 2003—most of the victims under four years old. That is more than four children killed per day—not by convicted sexual offenders or Internet predators, but by those entrusted to care for them. According to the National Center for Missing and Exploited Children, “danger to children is greater from someone they or their family knows than from a stranger.”

    If journalists, child advocates, and lawmakers are serious about wanting to protect children, they should turn from the burning matchbook in front of them to face the blazing forest fire behind them. The resources allocated to tracking ex-felons who are unlikely to re-offend could be much more effectively spent on preventing child abuse in the home and hiring more social workers.

    Eventually this predator panic will subside and some new threat will take its place. Expensive, ineffective, and unworkable laws will be left in its wake when the panic passes. And no one is protecting America from that.

    Note
    1. For more on this, see my article “Stranger Danger: ‘Shocking’ TV Test Flawed” here.


  7. LOL, 26 Jan 2007 @ 11:42am

    Whoa! Self-promotion! just write a short comment dumb@ss! u wanna write a novel? make a book!


  8. Anonymous Coward, 26 Jan 2007 @ 12:12pm

    Re:

    Sorry, but it is not my article; I just thought it was good and on topic. I was not able to just link to it.


  9. pongidae, 26 Jan 2007 @ 12:16pm

    Dang - write much ...

    Anonymous Coward (Benjamin Radford),
    Wow, what pompous self-promotion and self-flagellation; you are obviously very proud of yourself and your ability to write . . . a lot.
    But what was the point of your long-winded rant, and what did it have to do with punishment for the posting of crimes on YouTube?
    If you wanted to state how this is similar to the reaction to child porn or predators, then say that, cite your source, and move on. Child predators are a real danger, but it's a stretch to relate that crime to this article. The crime of the sexual predator would be made even greater if the predator were to record his/her crimes and upload them onto a file/video-sharing service such as YouTube. Stupid, yes, but the additional crime/punishment would not be for stupidity but for the glorification of the crime and the further demeaning of the child.


  10. billy, 26 Jan 2007 @ 12:25pm

    ha

    that was long and nicely informative
    and, shall i say, expected
    most news is crap to begin with
    they gotta spice things up or they won't have any watchers because most news is too boring
    thats why you usually see largely negative stories such as deaths and abductions and how this or that is going downhill
    i would like news to be all positive instead of just about 20%
    that'd be cool

    and @ # 7
    chill man, did you even read that? its interesting
    and I got a feeling that the poster is not the man who wrote the book
    and, if you notice before the spiel even begins, it already is a written book, so telling whoever it was to make a book, is kind of stupid, but whatever helps you sleep at night


  11. billy, 26 Jan 2007 @ 12:30pm

    don't worry

    poster in #6 and 8
    I listen and read, unlike these others who just want to flame because they are too lazy to read and understand

    i did enjoy your long post, even though I already understand you did not write it yourself, I am glad you took the time to copy it here
    I am more likely to believe it than I am the actual news, as I mentioned in my previous post.
    I enjoy reading everyone's opposing opinions on this site and its stories.
    I do get tired of people flaming each other because they don't understand or don't comprehend what a previous poster did / is talking about.
    And its even more upsetting when its about grammar.
    This is the internet.
    If you want to critique grammar, become a book critic and get off the f**kin internet, gosh
    or an English professor


  12. Dav, 26 Jan 2007 @ 12:42pm

    This can only make things harder

    If you add disincentives to do this, then fewer will do it, and you end up arresting fewer people and solving fewer problems.


  13. Ren, 26 Jan 2007 @ 12:44pm

    Re:

    An interesting article, but somewhat off point. It would seem that this new legislation is not so much aimed at sexual crimes as at general criminal activity (most likely sparked by that whole teen girls kicking ass thing). Worth noting is that this law would really not affect those who prey on children, since possession, creation, and distribution of child pornography is already very much illegal.
    As for the meat of the article itself, while I agree that sensational journalism is misdirecting the American public, and I cannot refute the statistics put forward in the article, I would say that in this case our lawmakers' hearts are in the right place. (Rare, though that may be.) Perhaps it is less likely for a child rapist to strike again than a robber or drug dealer, but the thought of someone having the opportunity to victimize a child more than once is just stomach turning. I would argue that some crimes indicate a mental deviation that is incompatible with society. Sexual assault of a child certainly fits that bill - if someone is capable of it, they probably have something very wrong in their head and always will.
    Also, there was one point in the article discussing the "to catch a predator" series. It said that the trap set for these predators was unrealistic because the sort of chat rooms these people are hanging out in are the places teens rarely go. That seems to be missing the forest for the trees. Where these people meet these young kids is totally beside the point - what matters is that someone is propositioning what they believe to be a young child. It makes me think of the whole "she wanted it" defense. I don't care if a 15 year old is in a sex chat room asking for an encounter - any adult who knows that it's a 15 year old should just not engage them. Only the type of person who was looking for that sort of situation in the first place would take the bait.


  14. misanthropic humanist, 26 Jan 2007 @ 12:54pm

    predators

    An interesting article #6, thanks. However, it's a shame that the thread has drifted somewhat off-topic, as this subject had the potential for very interesting discussion.


  15. Anonymous Coward, 26 Jan 2007 @ 1:10pm

    Hear, hear.


  16. Anonymous Coward, 26 Jan 2007 @ 1:38pm

    the media twist

    It's funny how the media can always put a different spin on what's happening. It was interesting to see that this one omitted the quote from the Congressman, who basically said the whole point was to let the judge take into account the intent of posting the video with respect to the victims. There was another article just recently that mentioned how "bullies" at school were putting up content of beating others up to embarrass the victim and raise fear in others. I mean, I understand the author's case, but it seems very one-sided. Then again, media tends to always be that way.


  17. Anonymous Coward, 26 Jan 2007 @ 1:58pm

    Accessories

    All persons who view said videos of criminal acts should be prosecuted as accessories after the fact and jailed.

    There... Eat that.


  18. danimal, 26 Jan 2007 @ 1:58pm

    Not so much...

    This ranks right up there with some other pretty ridiculous things I've read about lately. To all legislators, if you want to do something then DO SOMETHING! This is a thinly veiled attempt at capitalizing on some media momentum, and anyone who says otherwise is kidding themselves. I understand events come up that the legislature has to deal with swiftly; however, in most cases (including this one) our system of justice is well-equipped to handle it and we don't need additional laws.

    If you post a video of your crime to YouTube, is it really any worse than being captured on surveillance video or just keeping a copy for your own enjoyment?

    Why does the internet change that? What if someone else posted the video...does it still carry the same weight? If there is video that is deemed admissible, don't you think the average sentence is already higher? You remove doubt (reasonable, lingering, potential or otherwise), add visual impact, eliminate the need for plea agreements and so on, and what are you left with? Probably a pretty tough sentence.

    I don't think we need another classification of injustice for "plus video". Why not "plus DNA"? Or "plus drugs"? Because these additional components are already factored into our system of justice, and believe it or not - the internet did not create video-taped criminals.

    There is a system in place for people who feel they've been additionally victimized by the publication of their victimization - it's called civil court.

    So, thanks for the try, but since you have the access, dear legislator, why don't you deal with things that we don't already have working solutions to?


  19. misanthropic humanist, 26 Jan 2007 @ 3:49pm

    Re: Not so much...

    >If you post a video of your crime to YouTube, is it really any worse than being captured on surveillance video or just keeping a copy for your own enjoyment?

    Yeah definitely, if you are perpetrating the crime for the purposes of making the video. It changes everything.


  20. danimal, 26 Jan 2007 @ 4:27pm

    How?

    What does it matter the motivation? A crime is a crime, and short of any mitigating circumstances - the motivation is irrelevant.

    If I steal a car for the purpose of making a movie, robbing a bank or just getting to and from work - the fact remains I stole a car and need to be punished.

    These crimes seem more heinous because they are perpetrated against individuals, so our sense of morality says they're worse. That's logical. It's also accounted for already in our courts.

    Personally, the way the courts have acted lately, I think this will have the opposite effect to the desired one. Videoed crime will become the new high-water mark in punishment and everything else will be less severely adjudicated.

    Okay, probably not, but I got you thinking at least. :)


  21. danimal, 26 Jan 2007 @ 4:53pm

    The **AA

    The **AA's are probably salivating at this. Now, when some teeny-bopper decides to sing the latest Gwen Stefani track while dancing around her bedroom, then posts it to YouTube - the RIAA can go after her for copyright infringement plus special circumstances because she posted a "video of her crime".

    Awesome!


  22. misanthropic humanist, 26 Jan 2007 @ 5:14pm

    Re: How?

    What does it matter the motivation?

    In criminal law the motivation makes a big difference. For example, the facts are that a man is dead and you are found at the scene holding a gun, the weapon used.

    What happened? What next?

    The only thing that lies between your conviction for murder (premeditated killing you had planned for months) and a suspended sentence for manslaughter (you acted in self defence when a stranger tried to kill you with the gun) is your state of mind immediately before and during the act. You mention "mitigating circumstances", but that consideration is always part of criminal law. For example, a starving man who steals a loaf of bread can expect a lesser sentence. That is why judges are granted "equity", or the ability to hand out punishments that "fit the crime". Whenever you see a sentencing term quoted in the media it is nearly always the maximum.

    These crimes seem more heinous because they are perpetrated against individuals, so our sense of morality says they're worse.

    Crime against the person is always more serious than a crime against property. It's not just morality, it's the law in all civilised countries.
    (Please don't get the hump, but I am using "civilised" pejoratively there in reference to the USA, where the entire criminal justice system is completely dysfunctional imho; killers walk free while the most trivial acts of civil tort are punished by life sentences.)

    Crimes against the person can be very complicated, however. If those girls had attacked their victim and the video was made by a passer-by, then the crime depicted would be a lesser one than if the girls had decided "let's go and batter this person and film it so we can post it on the internet". It adds to the premeditation factor of the crime, to the evidence of intent.

    But, as you seem to say, the act of the assault should be treated differently from the act of publishing the video knowing that it would cause humiliation to the victim (which is ironic, because the only people really being humiliated are the attackers). That should be treated as a separate crime, which may carry an additional sentence.


  23. danimal, 26 Jan 2007 @ 8:05pm

    I think we agree...almost

    @misanthropic humanist

    I think we're in agreement that videoing a crime in progress for display purposes should have additional consequences. I think that is a universally granted premise.

    My beef is less with the intent of the bill and more with the fact that it is redundant. Judges and juries are allowed now to weigh the totality of the evidence including any external factors such as motivation, circumstances and premeditation. In some cases, crimes are already classified by those and other factors.

    As you stated, judges already have "equity" - so let's let the justice system do its job without having to needlessly bog it down.

    "Please don't get the hump , but I am using "civillised" pejoratively there in reference to the USA where the entire criminal justice system is completely dysfunctinal imho, killers walk free while the most trivial acts of civil tort or punished by life sentences"

    I tend to agree, though I think the international community gets a fairly distorted and sensationalized view based on high-profile cases or the "exceptional". However, I think that as we continue to segment to add over-reaching laws like this it just gets to be more and more of a mess. You think our criminal justice system is screwed - I'll send you our current tax code!

    The underlying problem is legislators who need to continuously grab headlines and warm the fuzzies of a constituency so they can continue to get re-elected, so instead of taking on any major problems it's much easier to work one week a month and pick off a small one that gets you in the papers. In lieu of that, politicians will bluster about a big problem to serve their base knowing it will never get anywhere - again, only in the interest of self-preservation.

    Okay, got a little off-track there. I guess my closing argument would be that I think it foolish to add a whole new classification or degree for a type of crime that already has several (assault), further erode the autonomy of our justice system to actually see that the "punishment fits the crime" (see drug-use and mandatory minimums) and feed the re-election campaign of some douchebag politician who I'm sure could be using his time more constructively to tackle some of the REAL problems this country is facing.

    Thanks for the spirited debate - it was very insightful.


  24. Anonymous Coward, 26 Jan 2007 @ 9:31pm

    Re: The **AA

    This has already happened, more because the song was playing in the background, but it's all the same to me.


  25. misanthropic humanist, 27 Jan 2007 @ 2:48am

    Okay, got a little off-track there. I guess my closing argument would be that I think it foolish to add a whole new classification or degree for a type of crime that already has several (assault)...

    Maybe you didn't get off track. Maybe I misunderstood the story. I took it that the politico in question was advocating a closer look at sentencing rather than proposing new legislation.

    Then you are quite correct. This clearly doesn't need new laws to deal with, good Lord, we've all got enough of that already. It just needs a wider awareness of the issues.


  26. Really, 29 Jan 2007 @ 3:54pm

    politicians

    I think all politicians should have to take an IQ test before they get elected. Oh sure, many can talk the talk but that's as deep as it goes.


  27. Anonymous Coward, 29 Jan 2007 @ 4:51pm

    No, danimal, you are not going to send anyone the US tax code, even just federal taxes, because if you can afford either the postage for a parcel containing the texts or the bandwidth to send it inside a year in any format you choose, you will be under investigation for breaking said tax code.

    The problem is that the perps should be sued afterwards, rather than sentenced for longer. Maybe people who are convicted of crimes should have to be jailed not for a certain amount of time, but rather for a certain amount of money. They would be lent the money to compensate their victims, and once they have earned enough to pay back the compensation plus interest, they would just have to work off the cost of their jail time. This might help victims. For murder, the compensation would be set at the value of the person's life insurance. This is obviously impractical for some crimes, such as espionage, but it could be used for many others.
    If you think this sounds a bit easy on the crooks, think how much/little fun it would be to make roads or whatever on the minimum wage, and live in jail.
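
    Worked out in rough numbers, the proposal amounts to: earn at the prison wage until the compensation, the interest, and the cost of your own jail time are paid off. A toy Python sketch of that arithmetic; every figure and name here (wage, cell cost, interest rate, the function itself) is invented purely for illustration, and it simplifies by netting the jail cost against wages day by day:

        def days_to_release(compensation, interest_rate, daily_wage, daily_jail_cost):
            """Days served under a hypothetical 'sentence measured in money' scheme."""
            debt = compensation * (1.0 + interest_rate)   # compensation owed, plus interest
            net_per_day = daily_wage - daily_jail_cost    # what a day of work pays down
            if net_per_day <= 0:
                raise ValueError("wage never covers jail costs; the debt is never paid")
            return debt / net_per_day

        # e.g. $20,000 owed at 10% interest, $58/day wages, $40/day cell cost:
        print(round(days_to_release(20_000, 0.10, 58.0, 40.0)))  # ~1222 days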


  28. Anonymous Coward, 6 Dec 2007 @ 1:57pm

    u suck u dont do shit about it


  29. melodie, 26 Feb 2008 @ 10:52am

    thats just wrong


  30. lolz man, 20 Mar 2008 @ 7:48am

    haz

    The scientific study of earthquakes is comparatively new. Until the 18th century, few factual descriptions of earthquakes were recorded, and the natural cause of earthquakes was little understood. Those who did look for natural causes often reached conclusions that seem fanciful today; one popular theory was that earthquakes were caused by air rushing out of caverns deep in the Earth's interior.

    The earliest earthquake for which we have descriptive information occurred in China in 1177 B.C. The Chinese earthquake catalog describes several dozen large earthquakes in China during the next few thousand years. Earthquakes in Europe are mentioned as early as 580 B.C., but the earliest for which we have some descriptive information occurred in the mid-16th century. The earliest known earthquakes in the Americas were in Mexico in the late 14th century and in Peru in 1471, but descriptions of the effects were not well documented. By the 17th century, descriptions of the effects of earthquakes were being published around the world - although these accounts were often exaggerated or distorted.

    The most widely felt earthquakes in the recorded history of North America were a series that occurred in 1811-1812 near New Madrid, Missouri. A great earthquake, whose magnitude is estimated to be about 8, occurred on the morning of December 16, 1811. Another great earthquake occurred on January 23, 1812, and a third, the strongest yet, on February 7, 1812. Aftershocks were nearly continuous between these great earthquakes and continued for months afterwards. These earthquakes were felt by people as far away as Boston and Denver. Because the most intense effects were in a sparsely populated region, the destruction of human life and property was slight. If just one of these enormous earthquakes occurred in the same area today, millions of people and buildings and other structures worth billions of dollars would be affected.

    The San Francisco earthquake of 1906 was one of the most destructive in the recorded history of North America - the earthquake and the fire that followed killed nearly 700 people and left the city in ruins.

    The Earth is formed of several layers that have very different physical and chemical properties. The outer layer, which averages about 70 kilometers in thickness, consists of about a dozen large, irregularly shaped plates that slide over, under and past each other on top of the partly molten inner layer. Most earthquakes occur at the boundaries where the plates meet. In fact, the locations of earthquakes and the kinds of ruptures they produce help scientists define the plate boundaries.
    There are three types of plate boundaries: spreading zones, transform faults, and subduction zones. At spreading zones, molten rock rises, pushing two plates apart and adding new material at their edges. Most spreading zones are found in oceans; for example, the North American and Eurasian plates are spreading apart along the mid-Atlantic ridge. Spreading zones usually have earthquakes at shallow depths (within 30 kilometers of the surface).


    Illustration of Plate Boundary Types

    Transform faults are found where plates slide past one another. An example of a transform-fault plate boundary is the San Andreas fault, along the coast of California and northwestern Mexico. Earthquakes at transform faults tend to occur at shallow depths and form fairly straight linear patterns.

    Subduction zones are found where one plate overrides, or subducts, another, pushing it downward into the mantle where it melts. An example of a subduction-zone plate boundary is found along the northwest coast of the United States, western Canada, and southern Alaska and the Aleutian Islands. Subduction zones are characterized by deep-ocean trenches, shallow to deep earthquakes, and mountain ranges containing active volcanoes.


    Map of the Tectonic Plates

    Earthquakes can also occur within plates, although plate-boundary earthquakes are much more common. Less than 10 percent of all earthquakes occur within plate interiors. As plates continue to move and plate boundaries change over geologic time, weakened boundary regions become part of the interiors of the plates. These zones of weakness within the continents can cause earthquakes in response to stresses that originate at the edges of the plate or in the deeper crust. The New Madrid earthquakes of 1811-1812 and the 1886 Charleston earthquake occurred within the North American plate.

    An earthquake is the vibration, sometimes violent, of the Earth's surface that follows a release of energy in the Earth's crust. This energy can be generated by a sudden dislocation of segments of the crust, by a volcanic eruption, or even by manmade explosions. Most destructive quakes, however, are caused by dislocations of the crust. The crust may first bend and then, when the stress exceeds the strength of the rocks, break and "snap" to a new position. In the process of breaking, vibrations called "seismic waves" are generated. These waves travel outward from the source of the earthquake along the surface and through the Earth at varying speeds depending on the material through which they move. Some of the vibrations are of high enough frequency to be audible, while others are of very low frequency. These vibrations cause the entire planet to quiver or ring like a bell or tuning fork.

    A fault is a fracture in the Earth's crust along which two blocks of the crust have slipped with respect to each other. Faults are divided into three main groups, depending on how they move. Normal faults occur in response to pulling or tension; the overlying block moves down the dip of the fault plane. Thrust (reverse) faults occur in response to squeezing or compression; the overlying block moves up the dip of the fault plane. Strike-slip (lateral) faults occur in response to either type of stress; the blocks move horizontally past one another. Most faulting along spreading zones is normal, along subduction zones is thrust, and along transform faults is strike-slip.

    Geologists have found that earthquakes tend to reoccur along faults, which reflect zones of weakness in the Earth's crust. Even if a fault zone has recently experienced an earthquake, however, there is no guarantee that all the stress has been relieved. Another earthquake could still occur. In New Madrid, a great earthquake was followed by a large aftershock within 6 hours on December 16, 1811. Furthermore, relieving stress along one part of the fault may increase stress in another part; the New Madrid earthquakes in January and February 1812 may have resulted from this phenomenon.

    The focal depth of an earthquake is the depth from the Earth's surface to the region where an earthquake's energy originates (the focus). Earthquakes with focal depths from the surface to about 70 kilometers (43.5 miles) are classified as shallow. Earthquakes with focal depths from 70 to 300 kilometers (43.5 to 186 miles) are classified as intermediate. The focus of deep earthquakes may reach depths of more than 700 kilometers (435 miles). The focuses of most earthquakes are concentrated in the crust and upper mantle. The depth to the center of the Earth's core is about 6,370 kilometers (3,960 miles), so even the deepest earthquakes originate in relatively shallow parts of the Earth's interior.
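
    Since those depth classes are simple cutoffs, they translate directly into code. A minimal Python sketch using the 70- and 300-kilometer boundaries from the paragraph above (the function name is just illustrative):

        def classify_focal_depth(depth_km):
            """Classify an earthquake by focal depth, per the cutoffs above."""
            if depth_km < 70:
                return "shallow"
            elif depth_km <= 300:
                return "intermediate"
            else:
                return "deep"  # deep focuses may reach more than 700 km

        print(classify_focal_depth(10))   # shallow -- typical of spreading zones
        print(classify_focal_depth(650))  # deep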

    The epicenter of an earthquake is the point on the Earth's surface directly above the focus. The location of an earthquake is commonly described by the geographic position of its epicenter and by its focal depth.

    Earthquakes beneath the ocean floor sometimes generate immense sea waves or tsunamis (Japan's dread "huge wave"). These waves travel across the ocean at speeds as great as 960 kilometers per hour (597 miles per hour) and may be 15 meters (49 feet) high or higher by the time they reach the shore. During the 1964 Alaskan earthquake, tsunamis engulfing coastal areas caused most of the destruction at Kodiak, Cordova, and Seward and caused severe damage along the west coast of North America, particularly at Crescent City, California. Some waves raced across the ocean to the coasts of Japan.

    Liquefaction, which happens when loosely packed, water-logged sediments lose their strength in response to strong shaking, causes major damage during earthquakes. During the 1989 Loma Prieta earthquake, liquefaction of the soils and debris used to fill in a lagoon caused major subsidence, fracturing, and horizontal sliding of the ground surface in the Marina district in San Francisco.

    Landslides triggered by earthquakes often cause more destruction than the earthquakes themselves. During the 1964 Alaska quake, shock-induced landslides devastated the Turnagain Heights residential development and many downtown areas in Anchorage. An observer gave a vivid report of the breakup of the unstable earth materials in the Turnagain Heights region: I got out of my car, ran northward toward my driveway, and then saw that the bluff had broken back approximately 300 feet southward from its original edge. Additional slumping of the bluff caused me to return to my car and back southward approximately 180 feet to the corner of McCollie and Turnagain Parkway. The bluff slowly broke until the corner of Turnagain Parkway and McCollie had slumped northward.

    The vibrations produced by earthquakes are detected, recorded, and measured by instruments called seismographs. The zig-zag line made by a seismograph, called a "seismogram," reflects the changing intensity of the vibrations by responding to the motion of the ground surface beneath the instrument. From the data expressed in seismograms, scientists can determine the time, the epicenter, the focal depth, and the type of faulting of an earthquake and can estimate how much energy was released.





    The two general types of vibrations produced by earthquakes are surface waves, which travel along the Earth's surface, and body waves, which travel through the Earth. Surface waves usually have the strongest vibrations and probably cause most of the damage done by earthquakes.

    Body waves are of two types, compressional and shear. Both types pass through the Earth's interior from the focus of an earthquake to distant points on the surface, but only compressional waves travel through the Earth's molten core. Because compressional waves travel at great speeds and ordinarily reach the surface first, they are often called "primary waves" or simply "P" waves. P waves push tiny particles of Earth material directly ahead of them or displace the particles directly behind their line of travel.

    Shear waves do not travel as rapidly through the Earth's crust and mantle as do compressional waves, and because they ordinarily reach the surface later, they are called "secondary" or "S" waves. Instead of affecting material directly behind or ahead of their line of travel, shear waves displace material at right angles to their path and are therefore sometimes called "transverse" waves.
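
    Because P waves outrun S waves, the lag between their arrivals at a station grows with distance, which gives a classic back-of-the-envelope way to estimate how far away an earthquake is. A minimal Python sketch; the 6 km/s and 3.5 km/s crustal velocities are typical assumed values, not figures from this text:

        # S-P lag: lag = d/vs - d/vp, so d = lag / (1/vs - 1/vp)
        VP_KM_S = 6.0   # assumed typical crustal P-wave speed
        VS_KM_S = 3.5   # assumed typical crustal S-wave speed

        def distance_from_sp_lag(lag_seconds):
            """Estimate distance (km) to an earthquake from the S-minus-P delay."""
            return lag_seconds / (1.0 / VS_KM_S - 1.0 / VP_KM_S)

        print(round(distance_from_sp_lag(10.0)))  # ~84 km for a 10-second lag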

    The first indication of an earthquake is often a sharp thud, signaling the arrival of compressional waves. This is followed by the shear waves and then the "ground roll" caused by the surface waves. A geologist who was at Valdez, Alaska, during the 1964 earthquake described this sequence: The first tremors were hard enough to stop a moving person, and shock waves were immediately noticeable on the surface of the ground. These shock waves continued with a rather long frequency, which gave the observer an impression of a rolling feeling rather than abrupt hard jolts. After about 1 minute the amplitude or strength of the shock waves increased in intensity and failures in buildings as well as the frozen ground surface began to occur ... After about 3 1/2 minutes the severe shock waves ended and people began to react as could be expected.

    The severity of an earthquake can be expressed in several ways. The magnitude of an earthquake, usually expressed by the Richter Scale, is a measure of the amplitude of the seismic waves. The moment magnitude of an earthquake is a measure of the amount of energy released - an amount that can be estimated from seismograph readings. The intensity, as expressed by the Modified Mercalli Scale, is a subjective measure that describes how strong a shock was felt at a particular location.

    The Richter Scale, named after Dr. Charles F. Richter of the California Institute of Technology, is the best known scale for measuring the magnitude of earthquakes. The scale is logarithmic so that a recording of 7, for example, indicates a disturbance with ground motion 10 times as large as a recording of 6. A quake of magnitude 2 is the smallest quake normally felt by people. Earthquakes with a Richter value of 6 or more are commonly considered major; great earthquakes have magnitudes of 8 or more on the Richter scale.
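
    Because the scale is logarithmic, comparing two magnitudes is just a power of ten. A quick Python sketch of the amplitude arithmetic described above (the energy line uses the commonly cited factor of about 32x per magnitude unit, which is an assumption beyond this text):

        def amplitude_ratio(m1, m2):
            """Ground-motion amplitude ratio between two Richter magnitudes."""
            return 10 ** (m1 - m2)

        print(amplitude_ratio(7, 6))  # 10.0  -- one unit means 10x the ground motion
        print(amplitude_ratio(8, 6))  # 100.0 -- two units means 100x
        # Radiated energy grows faster, roughly 10**(1.5 * dM) per the usual scaling:
        print(round(10 ** (1.5 * (7 - 6))))  # ~32x the energy per magnitude unit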

    The Modified Mercalli Scale expresses the intensity of an earthquake's effects in a given locality in values ranging from I to XII. The most commonly used adaptation covers the range of intensity from the condition of "I -- Not felt except by a very few under especially favorable conditions," to "XII -- Damage total. Lines of sight and level are distorted. Objects thrown upward into the air." Evaluation of earthquake intensity can be made only after eyewitness reports and results of field investigations are studied and interpreted. The maximum intensity experienced in the Alaska earthquake of 1964 was X; damage from the San Francisco and New Madrid earthquakes reached a maximum intensity of XI.

    Earthquakes of large magnitude do not necessarily cause the most intense surface effects. The effect in a given region depends to a large degree on local surface and subsurface geologic conditions. An area underlain by unstable ground (sand, clay, or other unconsolidated materials), for example, is likely to experience much more noticeable effects than an area equally distant from an earthquake's epicenter but underlain by firm ground such as granite. In general, earthquakes east of the Rocky Mountains affect a much larger area than earthquakes west of the Rockies.

    An earthquake's destructiveness depends on many factors. In addition to magnitude and the local geologic conditions, these factors include the focal depth, the distance from the epicenter, and the design of buildings and other structures. The extent of damage also depends on the density of population and construction in the area shaken by the quake.

    The Loma Prieta earthquake of 1989 demonstrated a wide range of effects. The Santa Cruz mountains suffered little damage from the seismic waves, even though they were close to the epicenter. The central core of the city of Santa Cruz, about 24 kilometers (15 miles) away from the epicenter, was almost completely destroyed. More than 80 kilometers (50 miles) away, the cities of San Francisco and Oakland suffered selective but severe damage, including the loss of more than 40 lives. The greatest destruction occurred in areas where roads and elevated structures were built on unstable ground underlain by loose, unconsolidated soils.

    The Northridge, California, earthquake of 1994 also produced a wide variety of effects, even over distances of just a few hundred meters. Some buildings collapsed, while adjacent buildings of similar age and construction remained standing. Similarly, some highway spans collapsed, while others nearby did not.

    Earthquakes are associated with volcanic eruptions. Abrupt increases in earthquake activity heralded eruptions at Mount St. Helens, Washington; Mount Spurr and Redoubt Volcano, Alaska; and Kilauea and Mauna Loa, Hawaii.



    A sudden increase in earthquake tremors signaled the beginning of eruptions at Redoubt Volcano in 1989-90.
    The location and movement of swarms of tremors indicate the movement of magma through the volcano. Continuous records of seismic and tiltmeter (a device that measures ground tilting) data are maintained at U.S. Geological Survey volcano observatories in Hawaii, Alaska, California, and the Cascades, where study of these records enables specialists to make short-range predictions of volcanic eruptions. These warnings have been especially effective in Alaska, where the imminent eruption of a volcano requires the rerouting of international air traffic to enable airplanes to avoid volcanic clouds. Since 1982, at least seven jumbo jets, carrying more than 1,500 passengers, have lost power in the air after flying into clouds of volcanic ash. Though all flights were able to restart their engines eventually and no lives were lost, the aircraft suffered damages of tens of millions of dollars. As a result of these close calls, an international team of volcanologists, meteorologists, dispatchers, pilots, and controllers has begun to work together to alert each other to imminent volcanic eruptions and to detect and track volcanic ash clouds.

    The goal of earthquake prediction is to give warning of potentially damaging earthquakes early enough to allow appropriate response to the disaster, enabling people to minimize loss of life and property. The U.S. Geological Survey conducts and supports research on the likelihood of future earthquakes. This research includes field, laboratory, and theoretical investigations of earthquake mechanisms and fault zones. A primary goal of earthquake research is to increase the reliability of earthquake probability estimates. Ultimately, scientists would like to be able to specify a high probability for a specific earthquake on a particular fault within a particular year. Scientists estimate earthquake probabilities in two ways: by studying the history of large earthquakes in a specific area and the rate at which strain accumulates in the rock.




    This time-exposure photograph shows the electronic laser ground-motion measurement system in operation at Parkfield, California, tracking movement along the San Andreas fault.


    Scientists study the past frequency of large earthquakes in order to determine the future likelihood of similar large shocks. For example, if a region has experienced four magnitude 7 or larger earthquakes during 200 years of recorded history, and if these shocks occurred randomly in time, then scientists would assign a 50 percent probability (that is, just as likely to happen as not to happen) to the occurrence of another magnitude 7 or larger quake in the region during the next 50 years.
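    As a back-of-the-envelope check on that reasoning, here is a minimal Python sketch (the names and figures are illustrative, not from any USGS tool). The publication's 50 percent figure is the simple "one expected event per 50-year window, so as likely as not" reading; if the same historical rate is instead fed into a Poisson model, the chance of at least one event comes out somewhat higher:

        import math

        def prob_at_least_one(events, years_observed, window_years):
            """Chance of at least one event in the window, assuming events
            occur independently at a constant average rate (Poisson model)
            estimated from the historical record."""
            rate = events / years_observed      # events per year
            expected = rate * window_years      # expected events in the window
            return 1.0 - math.exp(-expected)

        # The example from the text: four magnitude-7+ quakes in 200 years
        # of recorded history, forecast window of 50 years.
        print(f"{prob_at_least_one(4, 200, 50):.0%}")  # about 63%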

    But in many places, the assumption of random occurrence with time may not be true, because when strain is released along one part of the fault system, it may actually increase on another part. Four magnitude 6.8 or larger earthquakes and many magnitude 6 to 6.5 shocks occurred in the San Francisco Bay region during the 75 years between 1836 and 1911. For the next 68 years (until 1979), no earthquakes of magnitude 6 or larger occurred in the region. Beginning with a magnitude 6.0 shock in 1979, the earthquake activity in the region increased dramatically; between 1979 and 1989, there were four magnitude 6 or greater earthquakes, including the magnitude 7.1 Loma Prieta earthquake. This clustering of earthquakes leads scientists to estimate that the probability of a magnitude 6.8 or larger earthquake occurring during the next 30 years in the San Francisco Bay region is about 67 percent (twice as likely as not).

    Another way to estimate the likelihood of future earthquakes is to study how fast strain accumulates. When plate movements build the strain in rocks to a critical level, like pulling a rubber band too tight, the rocks will suddenly break and slip to a new position. Scientists measure how much strain accumulates along a fault segment each year, how much time has passed since the last earthquake along the segment, and how much strain was released in the last earthquake. This information is then used to calculate the time required for the accumulating strain to build to the level that results in an earthquake. This simple model is complicated by the fact that such detailed information about faults is rare. In the United States, only the San Andreas fault system has adequate records for using this prediction method.
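    As a toy illustration of that strain-budget arithmetic, the following Python sketch uses entirely hypothetical numbers (real fault records are far sparser and noisier, as the paragraph above notes):

        def years_until_critical(strain_released, annual_strain_rate, years_elapsed):
            """Years of loading still needed before the strain released in the
            last earthquake has accumulated again, assuming a constant rate."""
            full_cycle = strain_released / annual_strain_rate  # years per cycle
            return max(full_cycle - years_elapsed, 0.0)

        # Hypothetical segment: the last quake released strain equivalent to
        # 4.4 meters of slip, the segment loads at 0.02 meters per year, and
        # 150 years have passed since that quake.
        print(f"{years_until_critical(4.4, 0.02, 150):.0f} years of loading remain")  # about 70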

    Both of these methods, and a wide array of monitoring techniques, are being tested along part of the San Andreas fault. For the past 150 years, earthquakes of about magnitude 6 have occurred an average of every 22 years on the San Andreas fault near Parkfield, California. The last shock was in 1966. Because of the consistency and similarity of these earthquakes, scientists have started an experiment to "capture" the next Parkfield earthquake. A dense web of monitoring instruments was deployed in the region during the late 1980s. The main goals of the ongoing Parkfield Earthquake Prediction Experiment are to record the geophysical signals before and after the expected earthquake; to issue a short-term prediction; and to develop effective methods of communication between earthquake scientists and community officials responsible for disaster response and mitigation. This project has already made important contributions to both earth science and public policy.

    Scientific understanding of earthquakes is of vital importance to the Nation. As the population increases, expanding urban development and construction encroach upon areas susceptible to earthquakes. With a greater understanding of the causes and effects of earthquakes, we may be able to reduce damage and loss of life from this destructive phenomenon.

    The original "Earthquakes" publication is one of a series of general interest publications prepared by the U.S. Geological Survey to provide information about the earth sciences, natural resources, and the environment. To obtain a catalog of additional titles in the series "General Interest Publications of the U.S. Geological Survey," write:



    Print copies for sale by the:



    Last reprinted: 1995




    As the Nation's principal conservation agency, the Department of the Interior has responsibility for most of our nationally owned public lands and natural and cultural resources. This includes fostering sound use of our land and water resources; protecting our fish, wildlife, and biological diversity; preserving the environment and cultural values of our national parks and historical places; and providing for the enjoyment of life through outdoor recreation. The Department assesses our energy and mineral resources and works to ensure that their development is in the best interests of all our people by encouraging stewardship and citizen participation in their care. The Department also has a major responsibility for American Indian reservation communities and for people who live in island territories under U.S. administration.


    bb lol

    link to this | view in thread ]

  31. identicon
    lolz man, 20 Mar 2008 @ 7:50am

    haz 2

    link to this | view in thread ]

  32. identicon
    lolz man, 20 Mar 2008 @ 7:52am

    haz 3

    link to this | view in thread ]

  33. identicon
    lolzer woman, 20 Mar 2008 @ 7:55am

    haz lolz man

    lol lame

    link to this | view in thread ]

  34. identicon
    LaVerne, 16 Dec 2008 @ 12:24pm

    STIFFER PUNISHMENT

    I have been wanting to scream this to whoever would listen. It is wrong for the people who commit these terrible crimes to sit in a warm cell and have their food brought to them while they watch t.v. It is bull****. I am furious over it. Times for hard-working people are getting tougher while criminals don't have a worry in the world. No wonder they are committing crimes. Do you really think it's going to get better before it gets worse?

    link to this | view in thread ]

