The Necessity of a Federal Criminal Revenge Porn Statute

February 17, 2018

***Posted on behalf of Cate Nowak***

Revenge Porn: Introduction, Statistics, Forms, and Ramifications

Nonconsensual pornography, or revenge porn as it is commonly called, is a relatively new method of sexual exploitation and abuse that involves disseminating sexually graphic images or videos of an individual without said individual’s permission. The definition of revenge porn includes both pictures and videos created without consent, as well as pictures or videos that were created with consent and then distributed without consent [1].

With the internet becoming increasingly prevalent in our everyday lives, the frequency of revenge porn has increased dramatically. Studies have shown that one in 25 people in the United States has either been the victim of revenge porn or been threatened with having their illicit photos posted [2]. For girls and women between the ages of 15 and 29, this increases to one in ten [2]. For LGB individuals, this increases to 1.7 in ten [2].

Revenge porn can come in a number of forms. One popular form is for an ex-boyfriend or ex-girlfriend to begin distributing illicit material to others, often in an educational setting such as a school. Our society often trivializes this type of revenge porn, but the effects can be disastrous, including young victims committing suicide due to constant humiliation and harassment [3].

Another common form that revenge porn takes is a victim’s personal information being posted alongside the illicit material. This information may include the victim’s name, phone number, address, and place of work, among others. These posts may also include verbiage to make it appear that the victim posted the material themselves. This can include wording encouraging strangers to contact her/him as she/he is looking for random sexual encounters, or that she/he is looking for masochistic partners [1]. These types of posts are frequently linked to the victim’s social media or professional profiles [4].

Like most other things in the virtual world, the ramifications of revenge porn do not stay in the virtual world. Victims almost always experience some sort of real world effect, ranging from anxiety and depression to losing educational and career opportunities [1]. Revenge porn has been shown to increase the likelihood of real world stalking, sexual assault, and physical assault [1].

Victims of revenge porn often suffer severe emotional and mental health repercussions. A study conducted by the Cyber Civil Rights Initiative revealed that over 80% of revenge porn victims suffer from severe emotional distress and anxiety [5]. Severe emotional distress and anxiety often lead to physical manifestations, further injuring the victim [6]. If the anxiety and emotional distress become severe enough, they can cost the victim her/his career.

It is not uncommon for victims to lose their jobs, be denied employment, or be rejected from a school because of the revenge porn. When a victim’s personal information is posted with the illicit material, the victim is often targeted while at work [4], which can lead to the loss of her/his job. Additionally, when the victim’s personal information is included with the post, all it takes is a quick Google search of her/his name to reveal the revenge porn. Because most universities and employers now search for prospective students and employees online, victims are being denied opportunities due to the existence of this material [4].

Civil Options Force Criminal Options in Several States

Recently, the need for statutes criminalizing revenge porn has become apparent. Without a way to hold perpetrators criminally liable, victims have had to seek relief through civil means. The two most common civil options have been tort law and copyright law, but both are rather ineffective and unavailable to many victims [1].

These civil remedies are primarily ineffective because they often cannot achieve the one thing victims truly want and need: to have the illicit material of them removed from the internet [1]. Furthermore, victims are usually unable to afford a lawyer to help them through a lengthy lawsuit.

Though copyright law should, in theory, be able to achieve the removal of content, it has not. If the victim was the creator of the material (e.g., a “selfie”), she/he may use copyright law to force websites to remove the content [1]. However, because of the nature of the internet, taking down a single post accomplishes little; once the illicit material is shared, it is usually reposted on tens to hundreds of other websites [4]. Additionally, a significant number of websites will refuse to remove the content, even though legally required to do so, because they know most victims cannot afford a lawyer [4].

Tort law is equally ineffective at removing the content from the internet, and it has an added layer (or two) of inefficiency: even if a victim can afford a lawyer, most defendants are judgment-proof, and it is relatively hard to find a lawyer who is knowledgeable enough about this area of law to provide effective assistance [4].

In the past decade, it has become apparent that there is a real need for an effective deterrent. A study conducted by the Cyber Civil Rights Initiative revealed that 60% of perpetrators said that harsh criminal punishment (e.g. felony laws including imprisonment) would be the largest deterrence factor for them [7].

At this time, 38 states and Washington, D.C. have implemented statutes criminalizing revenge porn [8]. Though this is a step in the right direction for victims, these statutes vary tremendously, making it impossible to reach a consensus on what constitutes criminal revenge porn, how serious a crime it is, and what the punishment for it should be.

States place different requirements on what constitutes criminal revenge porn. Some states require that the dissemination must be for pecuniary gain [8]. Others state that the posting must be made with the intent to harass [8]. With so many varying ideas of what revenge porn is, victims, perpetrators, and the legal system are unable to get a firm grasp on what is actually a criminal act.

Some states even define by statute who may commit the crime of posting revenge porn. In Colorado, the statute explicitly states that the perpetrator must be at least 18 years old in order to be criminally liable for posting revenge porn [9]. Under this law, a minor who posts revenge porn of an adult would not be criminally liable.

Aside from what revenge porn is and who can commit the crime, states have not reached a consensus on how the offense should be classified or what the criminal punishment should be. States with existing criminal statutes for revenge porn are relatively split between misdemeanor and felony classifications [8]. Some states classify the crime differently depending on the age of the perpetrator [8]. Others increase the class of the crime based on various factors, such as reoffending, the means by which the material was transmitted, or whether it caused particular types of harm to the victim [8].

Federal Bill

On November 28, 2017, Senator Kamala Harris introduced a bill in the Senate, the ENOUGH Act, that would make the posting of revenge porn a federal crime [10]. ENOUGH stands for “Ending Nonconsensual Online User Graphic Harassment.”

The proposed bill would make it “unlawful to knowingly use any means. . . to distribute an intimate visual depiction of an individual” while knowing, or recklessly disregarding, that the individual did not consent to the distribution, that the individual had a reasonable expectation of privacy, and the harm that the distribution could cause [10]. However, the bill carves out an exception if the poster had an objectively reasonable belief that the distributed material touched upon a matter of public concern [10]. There are also exceptions for legitimate law enforcement purposes, for reporting such content, and for documentation in legal proceedings [10].

The proposed bill does not specify whether the crime would be a misdemeanor or a felony, but it sets a penalty of imprisonment not exceeding five years, a fine, or both [10]. The bill explicitly states that these penalties may apply to someone who intentionally threatens to commit the crime, whether by extortion or some other threat [10].

Our current patchwork of state laws and civil remedies does not adequately reflect the seriousness of revenge porn. If passed, this law would support victims by acknowledging the seriousness of the crime. It would also give a clear warning to possible perpetrators, adequately informing them of what constitutes a criminal act.

Presently, it is difficult to prosecute revenge porn because the internet does not respect state boundary lines. Venue is often a contested issue, as state criminal statutes are inconsistent and can give hefty advantages to one side or the other; in some states, revenge porn is not even a crime. Making revenge porn a federal crime would simplify prosecution by providing uniformity in the law. The text of the bill states that venue would be proper in the district where “the defendant or depicted individual resides or in a district where the intimate visual depictions are distributed or made available” [10]. If the defendant or victim is a citizen or permanent resident of the United States, there would also be extraterritorial federal jurisdiction over the matter [10].

Social Media’s Response

Several major social media websites have given their support to the criminalization of revenge porn, and have taken steps to try to remove this type of content as well as prevent it from ever being posted on their platforms in the first place. Both Facebook and Twitter have publicly given their support to the federal bill that was proposed at the end of last year.

This comes after a rather astonishing Facebook document from January 2017 was leaked. The document showed that during that month alone, Facebook had over 54,000 reports of revenge porn to evaluate [11]. As a result, Facebook disabled over 14,000 accounts that month due to revenge porn [11].

With social media becoming so prevalent and offering such a large platform for exploitation, it is imperative that social media websites take revenge porn seriously, support its criminalization, and take measures to remove and prevent revenge porn on their sites. To further combat revenge porn, Facebook has implemented a program that uses artificial intelligence and image recognition not only to catch revenge porn, but also to remove and block already-reported revenge porn [3].
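The details of Facebook’s system are not public, but the general idea behind blocking re-uploads of already-reported images can be sketched with a simple perceptual hash. Everything below is a hypothetical illustration (the function names and the toy 8x8 “image” are mine, not Facebook’s); production systems use far more robust matching.

```python
# Hedged sketch: blocking re-uploads of reported images via an "average hash".
# A perceptual hash maps an image to a short fingerprint that survives minor
# edits (re-compression, small tweaks), unlike an exact checksum.

def average_hash(pixels):
    """Compute a 64-bit hash from an 8x8 grayscale grid: 1 bit per pixel,
    set when the pixel is brighter than the image's average."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_blocked(pixels, blocklist, threshold=5):
    """True if the image's hash is within `threshold` bits of any
    previously reported hash."""
    h = average_hash(pixels)
    return any(hamming(h, known) <= threshold for known in blocklist)

# Toy example: a reported 8x8 "image" and a slightly altered re-upload.
reported = [[10 * r + c for c in range(8)] for r in range(8)]
blocklist = {average_hash(reported)}

reupload = [row[:] for row in reported]
reupload[0][0] += 3  # minor edit, e.g. re-compression noise
```

The design point is that a near-duplicate still matches (small Hamming distance), so once an image is reported, lightly modified copies can be stopped at upload time rather than after another round of reports.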

This move may be in response to a lawsuit that Facebook faced two years ago. Facebook was party to a lawsuit brought on behalf of a 14-year-old girl who was harassed by a man repeatedly posting a nude picture of her on Facebook [12]. Though Facebook removed the content “on more than one occasion”, the man was able to keep posting her photo [12]. The girl’s lawyer argued that Facebook should not have allowed the image to be repeatedly reposted after it had already been flagged as inappropriate and taken down [12].

Possible Topics to Respond to:

As described above, states have begun responding to revenge porn over the past five years by enacting statutes criminalizing it. However, there has been a push to criminalize it through federal law. Do you think one would be more effective than the other, and what factors need to be addressed in making this decision?

A large portion of revenge porn victims are minors (middle and high school age). Often, these minors are victimized by other minors.

How does the fact that this is technically child pornography affect the discussion?

In this type of situation, do you think there should be different criminal penalties for perpetrators who are also minors vs. perpetrators who are adults? What about minors posting revenge porn of adults?

If a federal law was passed criminalizing revenge porn, how would this interact with the federal law prohibiting child pornography?

The proposed federal bill puts the burden of proof on the victim to prove that the perpetrator knew the victim expected the material to remain private. Is it proper to place this burden on the victim? Or should the burden be placed on the perpetrator to prove that he/she did not know the victim expected the material to be kept private?


[1] Danielle Keats Citron & Mary Anne Franks, Criminalizing Revenge Pornography, 49 Wake Forest L. Rev. 345 (2014).

[2] Chris Morris, Revenge Porn Law Could Make It a Federal Crime to Post Explicit Photos Without Permission, Fortune (Nov. 28, 2017).

[3] Sara Ashley O’Brien, Facebook Launches Tools to Combat Revenge Porn, CNN Tech (Apr. 5, 2017).

[4] Adrienne N. Kitchen, The Need to Criminalize Revenge Pornography: How a Law Protecting Victims Can Avoid Running Afoul of the First Amendment, 90 Chi.-Kent L. Rev. 247 (2015).

[5] Cyber Civil Rights Initiative, Nonconsensual Porn: A Common Offense (June 12, 2017).

[6] Harvard Medical School, Anxiety and Physical Illness, Harvard Women’s Health Watch, Harvard Health Publishing (June 6, 2017).

[7] Cyber Civil Rights Initiative, Nonconsensual Porn: A Common Offense (June 12, 2017).

[8] Cyber Civil Rights Initiative, 38 States + DC Have Revenge Porn Laws.

[9] C.R.S.A. § 18-7-107 (2014).

[10] The ENOUGH Act, S. 2162, 115th Cong. (2017).

[11] Nick Hopkins, Facebook Flooded with ‘Sextortion’ and ‘Revenge Pornography’, Files Reveal, The Guardian (May 22, 2017).

[12] Ivana Kottasova, Facebook Faces Revenge Porn Trial Over Teenager’s Image, CNN Tech (Sept. 13, 2016).


Online Gambling Serial Blog – “Loot Boxes” (Blog Post 1 of 7)

February 17, 2018

Online gambling in the United States has created many legal uncertainties, from how individual states deal with various online gambling issues to how advancements in internet-based technologies complicate enforcement. Clearer rules and regulations are needed for the legal issues surrounding online gambling in the United States. This seven-part serial blog will discuss various online gambling issues; the first post deals with “loot boxes.”

Issues related to online gambling continue to evolve. One of the clearest examples of this evolution is the new issue of “loot boxes.” Specifically, two issues that will be discussed in this blog post are the issues of underage gambling related to “loot boxes” and the posting of odds for obtaining specific items from randomized “loot boxes.”

We will start with what exactly a “loot box” is. A “loot box” can be described as a mechanism that provides randomized virtual items for purchase in a game. [1] Many games rely on “loot boxes” as major sources of revenue: many are free-to-play titles in which players can play for free but purchase mystery boxes that may or may not contain the in-game items they want. [4]

Video game publishers have begun to employ predatory mechanisms, designed to exploit human psychology, that compel players to keep spending money in the same way casino games do, and that can present the same psychological, addictive, and financial risks as gambling. [2] These mechanisms allow players to purchase chances at winning rewards within games, which is comparable to a slot machine. [2]

If gamers’ purchases of “loot boxes” are considered gambling, the question becomes whether people under the age of 21 can legally engage in these in-game activities and purchase “loot boxes.” Hawaii has been leading the charge for legislation to limit underage gambling relating to “loot boxes.”

According to Hawaii House Bill 2686 and its accompanying Senate version, the proposed legislation would prohibit retailers (including those that operate online) from selling games that include “a system of further purchasing a randomized reward or rewards” to anyone under 21 years old. [3] The legislature cited a 2011 study showing that 91% of youth aged 2 through 17 played video games, and noted that mental health experts have raised concerns about exposing people under 21 to gambling-like mechanisms, which can affect cognitive development and lead to addiction, to which this group is particularly vulnerable. [2] There is currently no age restriction or disclosure required at the time of purchase for video games that contain “loot boxes” and other exploitative gambling-like mechanisms. [2] Game publishers can insert “loot boxes” into games at any time through game updates, without prior player or parental knowledge. [2]

Meanwhile, Hawaii’s House Bill 2727 would require game publishers to publicly disclose the odds of obtaining specific items from randomized “loot boxes” in their games. [3] The odds disclosure bill also allows the Hawaii Department of Commerce to audit the game code to confirm those odds, much as existing state gambling laws allow full audits of slot machine code. [3] Without the odds, there’s no way to really know how likely you are to get the item you want, something that even real cash lotteries are required to disclose. [4] Hawaii House Bill 2727 would also require “a prominent, easily legible, bright red label” to appear on games with “loot boxes” (or their online retail pages) warning of “in-game purchases and gambling-like mechanisms which may be harmful or addictive.” [3]
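To see why an odds disclosure matters, consider a hypothetical 1% drop rate (the rate and the code below are purely illustrative, not taken from any actual game or bill): with a published rate, a player can compute what a given number of purchases actually buys.

```python
# Hedged illustration: what a disclosed drop rate tells a player.
# Assumes each box is an independent draw at the published rate.

def chance_of_item(drop_rate, boxes):
    """Probability of getting at least one copy of the item in `boxes` opens."""
    return 1 - (1 - drop_rate) ** boxes

def expected_boxes(drop_rate):
    """Expected number of opens until the first copy (geometric distribution)."""
    return 1 / drop_rate

rate = 0.01  # a hypothetical disclosed 1% drop rate

print(f"Chance in 10 boxes:  {chance_of_item(rate, 10):.1%}")   # ~9.6%
print(f"Chance in 100 boxes: {chance_of_item(rate, 100):.1%}")  # ~63.4%
print(f"Expected boxes to first drop: {expected_boxes(rate):.0f}")  # 100
```

Note that even after 100 purchases at a 1% rate, the player is more likely than not to have the item but far from guaranteed, which is exactly the kind of information an undisclosed rate conceals.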

Even before any of this legislation has passed, Apple has taken steps to comply with possible future legislation. Apple updated its iOS App Store rules to require any game or app that utilizes “loot boxes” to disclose to customers, prior to purchase, the odds of receiving each type of item from the “loot box.” [1] Apple has beaten Google to the punch in announcing this kind of “loot box” odds rule, as Android’s Google Play contains no such rules or requirements. [1]

Aside from Hawaii, other states such as Washington and Indiana have introduced legislation to deal with “loot boxes.” In Washington, three Democratic state senators introduced a bill that would require the state gambling commission to examine loot boxes and determine “whether games and apps containing these mechanisms are considered gambling under Washington law.” [6] In Indiana, new legislation requires the attorney general to study certain issues concerning the use of loot boxes in video games and to make a recommendation as to whether loot boxes should be regulated as gaming in Indiana. [7]

Other countries have already implemented regulations related to “loot boxes.” In China, the Ministry of Culture released an updated set of rules that went into effect on May 1, 2017. [5] Online game operators now need to disclose the name, properties, content, quantity, and draw/forge probability of all virtual items and services that can be drawn/forged, on the official game website or a dedicated draw-probability webpage for the game. [5] Online game operators also now need to publicly announce customers’ random draw results, on either the official website or in-game, and keep those records for more than 90 days. [5] Two-step payment confirmation via email or text must also now be sent to the user every time a transaction is made, to stop accidental payments by young children or the user. [5]

The Entertainment Software Association, an industry trade group, has said in previous statements that it considers “loot boxes” to be “a voluntary feature” that lets “the gamer make the decision” to “enhance their in-game experience.” [3] The Entertainment Software Rating Board (ESRB) said in its own statement that “while there’s an element of chance in these mechanics, the player is always guaranteed to receive in-game content (even if the player unfortunately receives something they don’t want).” [3] “Loot box” issues are sure to arise in the future as more games incorporate this feature.

Are in-game “loot box” purchases considered to be gambling? Should games be required to disclose such “loot boxes” prior to user purchases? Should states or the federal government take the lead regarding regulations dealing with “loot boxes”? Do the regulations in China go too far? Should the ESRB be required to limit games with “loot boxes” to adults? These are just some questions relating to “loot boxes” that need to be answered. It will be interesting to see how states, and perhaps the federal government, address these issues in the future.


Intimate Partner Violence & the Internet

February 9, 2018

Initial note on terminology: Domestic violence is often used as a synonym for intimate partner violence (or IPV), as seen with the National Coalition Against Domestic Violence, for example. In the larger sense, “domestic violence” tends to include the family unit more broadly—such as violence by a parent against a child, or by one sibling against another. Domestic violence thus encompasses intimate partner violence—that is, intimate partner violence is a subset of domestic violence.

Intimate partner violence—a subset of domestic violence—is a pervasive problem in society, and the Internet isn’t necessarily making it better, particularly with regard to stalking. The Internet allows threatening behavior to occur with varying levels of anonymity and, in many cases, gives abusers easier access across state lines or in more evasive ways. Geographically, this complicates jurisdiction. This is particularly so since states often have their own domestic violence/stalking statutes (e.g., in Florida, Fla. Stat. § 784.046 (2017) deals with petitions for injunctions for protection against repeat violence, sexual violence, or dating violence), but interstate threats often trigger interstate commerce issues, as we will see.

I. DV Facts & Figures:

According to the National Coalition Against Domestic Violence (NCADV), at least 10 million people are physically abused by an intimate partner every year. NCADV defines domestic violence as “the willful intimidation, physical assault, battery, sexual assault, and/or other abusive behavior as part of a systematic pattern of power and control perpetrated by one intimate partner against another,” often including some combination of physical, sexual, psychological, or emotional abuse.

Abusive behavior revolves around power and control, the two key factors that abusers use to manipulate their partners. Such behaviors include, for example, controlling household monies, dictating the partner’s dress, controlling where the partner goes and who they see, intimidation via threats and weapons, the destruction of the partner’s personal property, and close monitoring of day-to-day actions (or, rather, stalking, the focus of this post). This list is non-exhaustive because, as society develops, new methods of manipulation emerge. In recent years, internet-based IPV harassment and stalking—cyberstalking—have become more and more prevalent, and with that prevalence, more dangerous.

There is general consensus that “stalking, in and of itself, is a serious crime.” (Groban 12). Stalking affects a huge portion of the American population each year. For reference, a 2012 U.S. Department of Justice survey estimated that 3.3 million people over the age of 18 were stalking victims during a 12-month period. (Groban 12). The CDC found that, during the January-December 2011 reporting period, “an estimated 15.2% of women and 5.7% of men had been a victim of stalking during their lifetime.” (CDC MMWR Surveillance Summaries, Vol. 63, No. 8, Sept. 5, 2014, at 1). Furthermore, “the confluence of domestic violence and stalking is particularly dangerous,” for both men and women. (Groban 13). One study suggests that over 75% of women murdered by their intimate partner had previously been stalked by the partner. Id. at 14.

As we move forward, bear in mind the following:

“Society has immeasurably benefitted from modern technological advances, yet cyberspace has proven to be a regrettably fertile landscape for stalkers and abusers. IPV relationships go beyond the physical component of violence; authorities characterize IPV as defined by patterns of abuse that are centrally geared towards the elimination of personal autonomy and the establishment and maintenance of control.”

(Chung, An Old Crime in A New Context: Maryland’s Need for A Comprehensive Cyberstalking Statute, 17 U. Md. L.J. Race, Religion, Gender & Class 117, 120–21 (2017)).

II. Cyberstalking & Law:

It is becoming more apparent that the domestic violence stalking problem is “compounded and magnified when stalkers avail themselves of cyber tools.” (Groban 12). Danielle Citron, a University of Maryland Law professor and cyberstalking specialist, describes cyberstalking, “at its most basic legal definition” as “a repeated course of conduct that’s aimed at a person designed to cause emotional distress and fear of physical harm.” (What the Law Can (and Can’t) Do About Online Harassment, Marlisse Silver Sweeney, The Atlantic). Such conduct may include overt threats of violence, the spreading of lies about the victim that are asserted as fact, the distribution of personal sensitive information of the victim, and technology attacks or hacking. Id. “Often, it’s a perfect storm of all these things.” Id.

Cyberstalking seems to work in tandem with traditional stalking, as was seen in the 2015 case U.S. v. Matusiewicz. In this extraordinary case, interstate and cyberstalking “took the form of a three-prong campaign by the victim’s ex-partner and his family, which used the Internet, the mail, and third parties to vilify and torment [the victim] and her children.” (US v. Matusiewicz: Lessons Learned From the First Federal Prosecution of Cyberstalking Resulting in Death, Jamie M. McCall & Shawn A. Weede, United States Attorneys’ Bulletin, May 2016, at 17). Matusiewicz was the first federal prosecution of cyberstalking that resulted in death. Id.

  • Elonis:

Also in 2015, the U.S. Supreme Court decided Elonis v. United States, which dealt with a complicated and murky area of the law—interstate threats via the Internet that were not expressly created to reach the victim. (Elonis v. United States: Consequences for 18 U.S.C. § 875(c) and the Communication of Threats in Interstate Commerce, Gretchen C. F. Shappert, United States Attorneys’ Bulletin May 2016, 30). Ultimately, this case dealt with the “reckless transmission of threatening communications.” Id. at 41. The defendant posted various types of poems, raps, and statements on his Facebook page expressing rage towards his separated wife (for example, he once posted: “There’s only one way to love you but a thousand ways to kill you. I’m not going to rest until your body is a mess….”). Id. at 31.

The trouble law enforcement officers and, later, prosecutors faced with these posts, however, was establishing that the minimum mental state for criminal culpability in interstate threats had been met. Id. “Criminal intent to violate 18 U.S.C. § 875(c) [the federal statute dealing with threatening interstate communication] can be established by showing that the defendant transmitted the communication ‘for the purpose of issuing a threat, or with knowledge that the communication will be viewed as a threat’.” Elonis, 135 S.Ct. at 2012. Could reckless transmission of a threatening communication constitute criminally culpable behavior?

Unfortunately, the Supreme Court declined to clearly define the requisite mental state; instead, the Court focused in part on a jury instruction, which failed to “require proof of recklessness.” (Shappert 35). The Court also glossed over the First Amendment issues “surrounding the reckless transmission of threatening communication.” Id. at 41. Ultimately, “the Elonis decision offers only very limited guidance to practitioners and ultimately leaves open for another day [the question of] whether recklessly conveying or transmitting a threat is sufficient to establish criminal intent.” Id.

  • Culpability:

What should be the level of culpability, then? Where overt violence has not occurred and abusers are merely recklessly making such overtures as in Elonis, it seems federal law has yet to provide true recourse for victims. In state courts, victims may sometimes seek injunctions against cyberstalking (see Fla. Stat. 784.0485(1), which explicitly enumerates cyberstalking as a stalking offense), but how likely is it that such a petition will be successful? Where cyberstalking occurs but does not reach the graphic nature of that in Elonis, it may be harder for petitioners to prevail.

Concluding thought:

“Right now, there are a handful of ways victims can address their attackers through the legal system, both civilly and criminally. Unfortunately, many of them are costly and invasive, and combined with a lack of education and precedent, these channels don’t always offer the justice people are seeking. The law is notoriously slow to adapt to technology, but legal scholars say that if done right, the law can be used as a tool to stop this behavior.”

(What the Law Can (and Can’t) Do About Online Harassment, Marlisse Silver Sweeney, the Atlantic)

A few questions for discussion:

  1. What evidentiary issues are presented by cyberstalking? Are there issues of evidence preservation?
  2. A strong First Amendment argument can be made in favor of those who make threatening overtures without directly threatening their significant other, particularly on social media. Bearing in mind the strong correlation between domestic violence stalking and femicide, as discussed above, how can the law support both the defendant/respondent’s First Amendment right and domestic violence survivors?
  3. What are some basic features of new law that you think should be a priority for protecting domestic violence/intimate partner violence survivors from cyberstalking?
  4. Should police officers be trained in responding to cyberstalking? If so, what are some things the training should involve?



  1. CDC MMWR Surveillance Summaries, Vol. 63, No. 8, Sept. 5, 2014, at 1.
  2. Christie Chung, An Old Crime in A New Context: Maryland’s Need for A Comprehensive Cyberstalking Statute, 17 U. Md. L.J. Race, Religion, Gender & Class 117, 120–21 (2017).
  3. Gretchen C. F. Shappert, Elonis v. United States: Consequences for 18 U.S.C. § 875(c) and the Communication of Threats in Interstate Commerce, United States Attorneys’ Bulletin, May 2016, at 30.
  4. Margaret S. Groban, Intimate Partner Cyberstalking—Terrorizing Intimate Partners with 21st Century Technology, United States Attorneys’ Bulletin, May 2016, at 12.
  5. Marlisse Silver Sweeney, What the Law Can (and Can’t) Do About Online Harassment, The Atlantic.
  6. National Coalition Against Domestic Violence (NCADV).
  7. Power and Control Wheel.
  8. Jamie M. McCall & Shawn A. Weede, US v. Matusiewicz: Lessons Learned From the First Federal Prosecution of Cyberstalking Resulting in Death, United States Attorneys’ Bulletin, May 2016, at 17.


Social Harms on the Internet: Sextortion, Revenge Porn, and Catfishing

November 14, 2016

Technology has become an integral part of our everyday lives. It is present at work, at home, at the grocery store, and even in our personal relationships. It is no surprise that the internet has made our private lives more public. But for some people, the internet has been used to invite the public into the most private parts of life, or to launch unwarranted character attacks. While most of these crimes existed before technology, the internet has changed how they are committed and how large an audience they reach.

Sextortion technically is not a legal term, but it describes “old-fashioned extortion or blackmail, carried out over a computer network, involving some threat…if the victim does not engage in some form of further sexual activity.” [1] These cases usually involve hackers who target their victims by obtaining sexual images or videos to use as leverage to get what they want. [2] These hackers typically target several victims, and the victims tend to be women or people under eighteen years old. [3] For example, Luis Mijangos victimized about 230 people by tricking them into downloading malware onto their computers and then using that malware to access all of their files. [4] He would obtain images or videos of a victim and then email them, demanding more pornographic material and threatening to post the images or videos online if his demands were not met. [5] He also used a keylogger to track everything his victims typed, and he would threaten them further if he found out they were talking to anyone about the situation. [6]

Since sextortion is a relatively new crime, there are few laws that directly criminalize the behavior. Defendants are commonly prosecuted under crimes such as hacking, stalking, extortion, or child pornography. [7] This lack of specific laws also leads to inconsistent sentencing. [8] When defendants are prosecuted under child pornography statutes, they receive fairly stiff sentences. [9] But when they are prosecuted under other statutes, the sentences are much lighter. Luis Mijangos, for example, was sentenced to only six years despite the fact that he had “15,000 webcam-video captures, 900 audio recordings, and 13,000 screen captures on his computers…possessed files associated with 129 computers and roughly 230 people…and 44 of his victims were determined to be minors.” [10]

While the goal of sextortion is to obtain something (usually sexual images of the victim), non-consensual pornography (NCP) is another crime that harasses victims with sexual images for the mere sake of vengeance. [11] NCP is more commonly known as revenge porn. [12] NCP is “the act of distributing sexually explicit photos or videos over the Internet without the subject’s consent and with the intent to embarrass or shame the subject.” [13] This is commonly committed by jilted exes but can also be done by hackers. [14] There have been websites created just for this purpose that allow the user to also include personal information such as the victim’s name, social media profiles, or job. [15]

It is difficult for justice to be served for the victims of NCP. Victims cannot go after the websites because, as distasteful and immoral as they are, the websites are protected by Section 230 of the Communications Decency Act. [16] Section 230 protects website providers from liability for user-posted content. [17] So, the CEOs of these websites cannot be prosecuted just because user Joe Schmo posted a sexual photo of his ex-girlfriend and included her name and workplace so that people could go harass her at work. The only way around Section 230 protection is if the website provider contributes to the post by editing or adding content. [18] Otherwise, the only person to prosecute is the user, and laws against NCP are still in the trial-and-error phase of legislation in most states.

Some states have attempted to criminalize the behavior. Vermont passed a bill that penalizes someone with up to two years in prison and a $2,000 fine for posting sexually explicit photos of a person without consent. [19] The bill includes language requiring the defendant to have intended to harm the victim and requiring that a reasonable person would suffer harm because of the disclosure. [20] The bill will hopefully punish the behavior without being overbroad. California attempts to encompass NCP under its cyber harassment statute. According to California Penal Code Section 653.2(a), it is illegal for a person to “inten[d] to place another person in reasonable fear for his or her safety…by means of an electronic communication device, and without consent of the other person, [and] electronically distributes, publishes, e-mails, hyperlinks, or makes available for downloading, personal identifying information, including, but not limited to, a digital image.” [21] The California law is likely to successfully criminalize the behavior because it is specific about the crime and the required intent without being too narrow. NCP laws in New Jersey, Idaho, and Wisconsin do not include the element of intent. [22] Those laws therefore restrict the content of the material posted rather than targeting the defendant’s state of mind. [23] This is a problem because restricting content skirts the line of infringing on a person’s First Amendment rights.

Victims might have better luck in state court filing claims of intentional infliction of emotional distress, invasion of privacy, sexual harassment, or copyright infringement, depending on the facts of their case. [24] However, these claims are not always easy to win and even if the victim is awarded a judgment, the defendant might not have the ability to pay. At that point, the victim has gone through the emotional struggle of being harassed, the stress of a court case, and fronted the money for a lawyer to fight the case only to win a judgment that will never be paid out. Going to court also requires the victim to confront the issue head on and continue publicizing the information and the ordeal. Some victims are already so embarrassed by the defendant’s actions that they do not want to go through the anguish any longer. So, even though there are avenues in civil court for victims to take, there are plenty of reasons that someone might not see those as viable options. [25]

Imagine: you meet someone online, his or her social media page doesn’t raise any red flags, and so you start chatting. Soon you have fallen for this person and you want to meet in real life. But every time you plan to meet, this person bails at the last minute. Eventually you get suspicious and start researching this person a little more. Come to find out, this person is a total fake who stole some model’s photos, and your whole relationship was built on lies. Congratulations, you have been catfished. Catfishing is another crime that has recently become more common. The legal term for catfishing is online impersonation. [26] Online impersonation occurs when someone creates a fake online profile to deceive others, usually to bait the victim into a romantic relationship. [27] These schemes can range from a fake online relationship to outright cyberbullying, so legislation must be drawn clearly. [28] Unfortunately, social media websites are not required to verify the identity of their users before a profile is created. [29] Sites like Facebook prohibit online impersonation in their Terms and Conditions and provide users with the ability to report instances of online impersonation. But this is all that they are required to do. [30] Many states are endeavoring to enact laws against online impersonation, but none have yet been passed and analyzed over time.

These social harms that are inflicted on the internet are becoming more and more prevalent. The states and federal government need to enact legislation to appropriately deal with the new behaviors. The biggest problem with enacting legislation to combat internet or technology crimes is that technology is always changing, so it is difficult to be proactive rather than reactive. There will always be a new behavior that victimizes people through the internet. Hopefully, the government can find ways to keep up with technological advances to prevent social harms from escalating too far.


  1. How do you think sextortion should be criminalized and prosecuted more consistently?
  2. Should Section 230 be amended to be narrower? If so, how should it be amended?
  3. Should these crimes all be criminalized or is it best for social harms to be left to civil court?
  4. Aside from criminal charges or civil complaints against the perpetrator, should there be any other recourse for victims?












[11] Emily Poole, Fighting Back Against Non-Consensual Pornography, 49 U.S.F. L. Rev. 181.

[12] Emily Poole, Fighting Back Against Non-Consensual Pornography, 49 U.S.F. L. Rev. 181.

[13] Emily Poole, Fighting Back Against Non-Consensual Pornography, 49 U.S.F. L. Rev. 181.

[14] Emily Poole, Fighting Back Against Non-Consensual Pornography, 49 U.S.F. L. Rev. 181.

[15] Emily Poole, Fighting Back Against Non-Consensual Pornography, 49 U.S.F. L. Rev. 181.



[18] Emily Poole, Fighting Back Against Non-Consensual Pornography, 49 U.S.F. L. Rev. 181.




[22] Emily Poole, Fighting Back Against Non-Consensual Pornography, 49 U.S.F. L. Rev. 181.

[23] Emily Poole, Fighting Back Against Non-Consensual Pornography, 49 U.S.F. L. Rev. 181.

[24] Emily Poole, Fighting Back Against Non-Consensual Pornography, 49 U.S.F. L. Rev. 181.

[25] Emily Poole, Fighting Back Against Non-Consensual Pornography, 49 U.S.F. L. Rev. 181.

[26]  Kori Clanton, We Are Not Who We Pretend To Be: ODR Alternatives to Online Impersonation Statutes, 16 Cardozo J. Conflict Resol. 323.

[27]  Kori Clanton, We Are Not Who We Pretend To Be: ODR Alternatives to Online Impersonation Statutes, 16 Cardozo J. Conflict Resol. 323.

[28]  Kori Clanton, We Are Not Who We Pretend To Be: ODR Alternatives to Online Impersonation Statutes, 16 Cardozo J. Conflict Resol. 323.

[29]  Kori Clanton, We Are Not Who We Pretend To Be: ODR Alternatives to Online Impersonation Statutes, 16 Cardozo J. Conflict Resol. 323.

[30]  Kori Clanton, We Are Not Who We Pretend To Be: ODR Alternatives to Online Impersonation Statutes, 16 Cardozo J. Conflict Resol. 323.


Online Human Trafficking, Privacy, and 1st Amendment Concerns

•November 14, 2016 • 11 Comments

Human trafficking is an age-old problem spanning millennia. In this millennium, it is a problem that has found its way into a new venue – the internet. On the internet, human traffickers have found a means through which they may market their product while disguising themselves from authorities and law enforcement. While there are many ways this may be done, with varying levels of success, one method that has been used successfully is advertising on the Deep Web. [1] Roughly 90% of the Deep Web does not appear to be indexed by the search engines with which we are familiar – think Google, Bing, and Yahoo – and so it is largely unknown space to most Internet users. [2] In this information vacuum could be found any number of things, or any amount of information, that could be used in the fight against child pornography, money laundering, drug trafficking, and, the focus of this piece, human trafficking.

The Defense Advanced Research Projects Agency (DARPA), a branch of the Department of Defense tasked with developing emerging technology for the U.S. military, has stepped into the human trafficking sphere ever so lightly by developing a search engine capable of scouring the internet for information concerning human trafficking that could lead to successful future prosecutions. [3][4] The program it has designed is known as “Memex.” [5] Think of Memex not just as a search engine that reacts to the keywords entered by a user, but rather as a program that finds information and returns it to the user AND also shows the user assorted patterns in the data: relationships between different pieces of information (such as an e-mail address and all sex advertisements accessed by the holder of that e-mail address), and the places where specific kinds of information are most heavily concentrated (such as where sex advertisements are being posted most frequently). [6]
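The kind of cross-referencing described above can be pictured with a toy sketch: given scraped ad records, index them by contact e-mail and tally postings per region. This is purely illustrative; Memex’s internals are not public at this level of detail, and the field names here are invented for the example.

```python
from collections import defaultdict

# Hypothetical scraped ad records (fields invented for illustration).
ads = [
    {"email": "a@example.com", "region": "Houston"},
    {"email": "a@example.com", "region": "Dallas"},
    {"email": "b@example.com", "region": "Houston"},
]

by_email = defaultdict(list)   # e-mail address -> every ad tied to that address
by_region = defaultdict(int)   # region -> how many ads were posted there
for ad in ads:
    by_email[ad["email"]].append(ad)
    by_region[ad["region"]] += 1

# One address linked to ads in two regions; Houston has the heaviest posting.
assert len(by_email["a@example.com"]) == 2
assert by_region["Houston"] == 2
```

Even this trivial grouping shows why investigators find such tools useful: a single contact detail can tie together advertisements that would otherwise look unrelated.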

The ramifications of widespread use of this technology seem clear: while the technology could prove an excellent tool in the extermination of human trafficking through the Deep Web, it could also leave privacy rights activists a little queasy about the prospect of government having yet another means through which to monitor the activity of Internet users globally and domestically. One offshoot of the Memex program has taken place at Carnegie Mellon University, where a program called “Traffic Jam” has been created. [7] Researchers at Carnegie Mellon have even begun to research ways the program could identify corresponding images in different pictures, which could allow law enforcement to link different pictures to the same hotel room or other location in order to assist in the fight against human trafficking. [8]

While Memex is not yet being released for use by all of law enforcement or other organizations, it has been released to specific organizations for test use with varying levels of success (up to and including actual convictions). [9] In the meantime, the creators of Memex are attempting to test the program at increasing levels each quarter while simultaneously attempting to avoid any potential legal pitfalls regarding government surveillance. [10] The questions raised by the creation of the program are many but here are a couple worthy of classroom discussion:

  1. What constitutional/legal problems, if any, do you believe that the use of Memex could encounter?
  2. Does the fact that, allegedly, thus far only information available to the public has been used by the Memex program to find potential human traffickers in test runs give you peace of mind concerning how that information can then be used to map out the activity of users suspected of criminal activity?
  3. Do you believe that a user accessing a sex advertisement multiple times should constitute probable cause? How about 100 different advertisements? 1 advertisement?


Another avenue through which human traffickers have found success is the use of standard online advertisement sites such as Craigslist and, with some estimates showing that has at times hosted as much as 70%-80% of all American prostitution advertising. [11] While is the current focus of a great deal of the anti-human-trafficking crowd’s ire, Craigslist was once public enemy number one in this regard. After a great deal of public outcry, the Sheriff of Cook County, Illinois (Thomas Dart) filed suit against Craigslist, alleging that users in the Chicago area routinely used the erotic services section of Craigslist to openly offer money in exchange for sex, and claimed this to be a public nuisance as well as the facilitation of prostitution. [12] The sheriff hoped to recover the funds his department spent policing Craigslist-related prostitution, as well as both compensatory and punitive damages. [13] The end result of this action was a victory for Craigslist in the courtroom, but the larger result of note is that Craigslist shut down its “erotic services” page shortly thereafter. [14][15]

Many of the arguments put forth by Craigslist in its case against Thomas Dart have now been used by Carl Ferrer and in a series of legal actions in the states of Tennessee, Washington, and New Jersey (what an odd set of bedfellows), as well as in response to a subpoena from a U.S. Senate committee regarding’s practices. [16][17] Principal among these are that he is protected by the First Amendment, that he cannot be held accountable for the posts of third parties on, and that, according to the Communications Decency Act, he cannot be held liable for posts made by third parties on the site. [18]

Each of these arguments raises distinct, legitimate concerns for law enforcement, the general public, and First Amendment jurisprudence. First, Ferrer’s attorneys have argued that “[t]he First Amendment bars the prosecution because imposing an obligation on publishers to review all speech to ensure that none is unlawful would severely chill free expression.” [19] The language used by his attorneys tracks that in Ashcroft v. ACLU but also, more importantly, in, LLC v. Cooper – one of the cases has already won, in which it prevented the State of Tennessee from enforcing a new law that would have criminalized’s role as the alleged intermediary between traffickers and customers. [20] The argument really strikes at two potential issues: forcing to review every single listing before it is published, and the impact that would have on free speech. In other words, is arguing that it should not be forced to review every listing, as that would be both infeasible and unfair to since it does not create the contents of the posts. [21] Additionally, doing so would chill free speech by not clearly stating which speech would be illegal and which would not, and also by stalling listings as long as necessary for them to be reviewed. [22]

Intertwined with the last main argument regarding the Communications Decency Act is the second argument presented by – that it is not responsible for the posts of third parties on its site. This is because the federal Communications Decency Act, inter alia, mandates that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” [23] If that isn’t bad enough already for law enforcement and states trying to cut into human trafficking, the law also states that “…no liability may be imposed under any State or local law that is inconsistent with this section.” [23]

This means that the Communications Decency Act, in layman’s terms, provides that cannot be treated as the publisher of information on its site supplied by another individual, and that no state or locality can do anything contrary to this notion. Regardless of any individual’s personal views on human trafficking, the First Amendment, and so forth, the law is clearly stated, and three separate cases in this area involving have all demonstrated that the pre-emption line of attack is very strong. [24]

There are, however, a couple of ways around this. The first is admittedly hopeful, and one I have personally embraced: amendment of the Communications Decency Act to confront the reality of online human trafficking, together with passage of a corresponding federal criminal statute aimed directly at online human trafficking. This would focus federal attention and resources on human trafficking while allowing states to attack the problem as they see fit within their own borders (as well as allowing for full litigation of issues such as what language gives the public proper notice of what conduct is illegal and what is not, experimentation in different states with different enforcement models, what terminology is neither overly broad nor too vague, etc.).

Another is an avenue through which only federal prosecutors could operate: the CDA expressly states that the law is not intended to compromise the ability of the federal government to enforce federal criminal law. [25] Given the proper test case and thorough litigation, this language could be a useful tool for federal prosecutors moving forward should a federal statute clearly and constitutionally criminalize online human trafficking, but would be unlikely to help states and localities combat online human trafficking as the CDA would still preclude them from passing laws contrary to the spirit of the CDA.

Behind all of this is the crucial backdrop that content-based restrictions on free speech are, of course, subject to strict scrutiny by the judiciary – which means that, generally, when challenges the enforcement of a new statute anywhere in the country, it will do so with the benefit of strict scrutiny analysis of its allegations. [26] What is clear is that current efforts to curb human trafficking are simply not effective enough: some estimates of the number of individuals currently being trafficked range into the several millions, with hundreds of thousands in the U.S. alone, the majority of whom have been sold online at least once. [27] Yet we should always be careful not to restrict speech without great care, just cause, and a clear articulation of exactly what speech is illegal.


  1. What do you make of each of Ferrer’s three main arguments? (Generally speaking, that is – the strength of these arguments does of course vary based on circumstances, such as whether Ferrer is being subpoenaed or is fighting a new statute.)
  2. How would you approach the problem of curbing online human trafficking if you were a member of Congress while also not running afoul of First Amendment concerns?
  3. Should Ferrer be held liable for the posts of other individuals even if a means through which to prosecute him can be found given he is not the producer of the posts? Why or why not?






[11], [15], [17]

[12]-[14] Dart v. Craigslist, Inc., 665 F.Supp.2d 961 (N.D. Ill. 2009)

[16], [18]-[19]

[20]-[22], LLC v. Cooper

[23] 47 U.S.C. § 230(c)(1) (2012)

[24], LLC v. Cooper;, LLC v. McKenna;, LLC v. Hoffman

[25] 47 U.S.C. § 230(e)(1) (2012)

[26] United States v. Playboy Entm’t Grp., Inc., 529 U.S. 803, 818, 120 S.Ct. 1878, 146 L.Ed.2d 865 (2000).

Should DFS be Considered Gambling?

•November 6, 2016 • 9 Comments

Fantasy sports have become one of the most popular games in America over the past few decades. With the increase in technology and statistical tracking ability, fantasy sports have evolved from a hobby for only the most dedicated sports followers into the multi-million-dollar industry they are today. The basic concept of fantasy sports is that online fantasy players select real-world athletes based on their projected statistics in specified scoring categories to make up fantasy teams that compete head to head. Depending on the fantasy sport, different statistical categories are awarded different numeric point values to determine who wins the fantasy matchup. For example, in most fantasy football leagues a standard team is made up of one quarterback, two running backs, two wide receivers, one tight end, one kicker, one team defense, and what is called a flex player that can be either a running back, wide receiver, or tight end. These players earn points for the fantasy owner based on their actual performances in real-world games.
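The scoring mechanics described above amount to a weighted sum of real-world statistics. A minimal sketch, using made-up point values loosely modeled on common league settings (real leagues vary widely):

```python
# Hypothetical point values per statistical category (illustrative only).
SCORING = {"pass_td": 4, "pass_yd": 0.04, "rush_yd": 0.1, "rush_td": 6, "reception": 1}

def fantasy_points(stat_line: dict) -> float:
    """Sum each real-world stat weighted by its fantasy point value."""
    return sum(SCORING.get(stat, 0) * value for stat, value in stat_line.items())

# A quarterback's week: 300 passing yards, 2 passing TDs, 20 rushing yards.
qb_week = {"pass_yd": 300, "pass_td": 2, "rush_yd": 20}
print(fantasy_points(qb_week))  # 12 + 8 + 2 = 22 points
```

Summing these weighted lines across a whole roster is what decides a head-to-head matchup.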

Today there are two main gameplay styles for fantasy sports: traditional and daily. The first, traditional fantasy sports (TFS), runs the full length of the season of the real-life sport it is based on. TFS is, for the most part, considered a game of skill and is therefore legal and not considered gambling. The second, daily fantasy sports (DFS), runs for only one day or one week, depending on the sport the fantasy game is based on. The legality of DFS has recently been called into question a number of times by different states; DFS will therefore be the main focus of this blog post.

Daily fantasy sports contests can vary in both entry cost and payout breakdown, depending on the type of contest. Some of the main types of daily fantasy sports contests offered on DraftKings and FanDuel are:

Guaranteed Prize Pools: Players pay a set entry fee to compete for a share of a fixed prize pool; GPPs run regardless of whether they fill up or not.

“Cash Games”: Players can either join an existing league or create their own league, in which the best-performing fantasy teams win prizes. These are smaller than GPPs and not guaranteed to run.

Head-To-Head: A contest that pits two players against one another; the winner receives the entire prize pool.

50/50: The top half of the field nearly doubles their investment; the other half of the field receives nothing. [1]

DFS is a relatively recent addition to online fantasy sports. It was started in approximately 2007, one year after Congress essentially killed the online poker industry with the passage of the Unlawful Internet Gambling Enforcement Act (UIGEA), which contained a specific exception for online fantasy sports. [2] The language of the UIGEA exempts fantasy sports that meet certain conditions. One such condition is that, “all winning outcomes reflect the relative knowledge and skill of the participants and are determined predominantly by accumulated statistical results of the performance of individuals (athletes in the case of sports events) in multiple real-world sporting or other events.” Thus, in order to be exempt under the UIGEA, DFS must be based upon the skill of the fantasy participants; it must not be based upon chance. [3] Further, the UIGEA defines unlawful internet gambling as placing a bet using the internet where such bet is “unlawful under any applicable Federal or State law in the State or Tribal lands in which the bet or wager is initiated, received or otherwise made.”[4] Therefore, if a state, such as Florida, finds that daily fantasy sports constitutes illegal gambling under its state law, then it would be considered unlawful internet gambling under the UIGEA as well.

Under the common law, gambling is defined as those activities in which a person pays consideration for the opportunity to win a prize in a game of chance. Because most games contain elements of skill and chance, this common law definition requires the court to determine whether a game is one of chance or skill. “Skill has been defined as the exercise of sagacity upon known rules and fixed probabilities where sagacity includes keenness of discernment or penetration with soundness of judgment; shrewdness; or the ability to see what is relevant and significant.” [5] Chance can be described as something that is not planned or designed. The participant in the game has absolutely no control over the elements of chance, which distinguishes them from the elements of skill. [6]

Recently, New York State’s attorney general filed lawsuits against the two leading DFS companies, FanDuel and DraftKings, asking for an injunction to stop them from operating, saying that they violate the prohibition on gambling that is part of the state’s constitution. In his complaint, Mr. Schneiderman argued, “DFS is a new business model for online gambling. The DFS sites themselves collect wagers (styled as “fees”), set jackpot amounts, and directly profit from the betting on their platforms. DFS’ rules enable near-instant gratification to players, require no time commitment, and simplify game play, including by eliminating all long-term strategy.” [7] Mr. Schneiderman also argued that the instant gratification of DFS and the ability to wager large sums of money could lead to gambling addiction. Due to this recent litigation, the New York legislature has passed a bill recognizing DFS as a game of skill and creating regulations for the DFS industry to follow in order to be considered legal in the state of New York. [8]

These regulations fall under three main categories: registration and oversight for operators; operator taxes; and consumer protection. Under registration and oversight, the bill requires that all paid-entry fantasy sports operators wishing to do business in the state apply with the New York State Gaming Commission. [9] Under operator taxes, registered sites must pay a 15% tax on gross revenues derived from New York players, plus an additional tax of 0.5% of revenues with a $50,000 annual cap. [10] Under consumer protection, sites must prevent play by minors. Also, in advertisements and upon entry into a contest, operators must “make clear and conspicuous statements that are not inaccurate or misleading concerning the chances of winning and the number of winners.” [11] Lastly, sites must ensure that player funds are segregated from operational funds. [12]
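The operator-tax math above can be sketched in a few lines. This is a back-of-the-envelope reading of the bill as the post describes it, assuming the $50,000 annual cap applies to the additional 0.5% tax:

```python
def ny_dfs_tax(gross_ny_revenue: float) -> float:
    """Estimate the annual NY operator tax under the described scheme."""
    base = 0.15 * gross_ny_revenue                      # 15% of NY-derived gross revenue
    additional = min(0.005 * gross_ny_revenue, 50_000)  # 0.5%, capped at $50,000/year
    return base + additional

print(ny_dfs_tax(1_000_000))   # 150,000 + 5,000 = $155,000
print(ny_dfs_tax(20_000_000))  # 3,000,000 + capped 50,000 = $3,050,000
```

Note how the cap only matters for very large operators: the 0.5% component hits $50,000 at $10 million of New York revenue.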


Questions for Class:

Do you think DFS should be considered illegal gambling?

Do you think the UIGEA fantasy sports exemption should apply to DFS?

Do you agree with the New York Attorney General that DFS is a new business model for online gambling?

Do you think the New York legislation does enough to regulate DFS? If not, what regulations would you add?



[3] 31 U.S.C. § 5362

[4] 31 U.S.C. § 5362









Hacking Tor: Is it justified to protect children?

•October 23, 2016 • 11 Comments

The Tor Network (“Tor”), also known as “the Dark Net,” allows individuals to browse the internet secretly. Tor provides anonymity to its users, which makes it nearly impossible for the government to trace which sites a user visits. While such a network can be helpful for people living under an oppressive government, studies have found that it is more frequently used for criminal activity. For example, child pornographic images are among the most requested materials.

So how does Tor work? Tor uses an encryption tool called The Onion Router to route data through randomly assigned computers located around the world in order to mask the user’s actual internet protocol address (“IP address”). These proxy servers, located all over the world, are called “nodes.” When a user logs into Tor, that user’s computer first pulls various nodes from the Tor server. Next, the computer randomly chooses a node through which to log into the network. This means the user is already using a different IP address before even entering Tor. Once the user logs into Tor, the user’s information is sent from one node to another, and each node only communicates with the immediately preceding and following node. The user’s computer wraps the information in multiple layers of encryption, and each node the information passes through peels away one layer. The information then exits through the last node in order to reach the target webpage; once it exits Tor, all encryption has been removed. To simplify: if a person uses Computer A to access a certain site through Tor, and Computer B is the exit node, the accessed webpage can only trace communications to and from Computer B. The webpage cannot trace any information back to Computer A. [1]
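The layered routing described above can be illustrated with a toy simulation. This uses base64 wrapping as a stand-in for real encryption (it is NOT cryptography, just a visualization of the layering): the sender wraps the message once per node, and each node peels exactly one layer, learning only the next hop.

```python
import base64

def build_onion(message: str, route: list) -> str:
    """Wrap the message in one layer per node; the innermost layer is for the exit node."""
    payload = message
    for node in reversed(route):
        payload = base64.b64encode(f"{node}|{payload}".encode()).decode()
    return payload

def peel(onion: str):
    """A node removes exactly one layer, revealing its own name and the inner payload."""
    node, _, inner = base64.b64decode(onion).decode().partition("|")
    return node, inner

route = ["entry", "middle", "exit"]
hop = build_onion("GET example.page", route)
for expected in route:
    node, hop = peel(hop)       # each relay sees only its own layer,
    assert node == expected     # never the full path or (except the exit) the message

assert hop == "GET example.page"  # only after the last peel is the request visible
```

This mirrors the post’s Computer A / Computer B example: only the exit node ever holds the unwrapped request, so the destination can trace traffic back no further than that exit.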

Since the government is aware of the criminal activity on the Tor network, it created “Trojan horse devices.” Such devices come in various forms, such as data extraction software, port readers, or network investigative techniques. Since the cases currently revolving around the government’s hacking of Tor have adopted the term network investigative technique (“NIT”), I will focus on that device. The NIT is installed on the target website, and every time a user visits the target website, the NIT sends a communication to the user’s computer. That communication results in the user’s computer delivering locating data to another computer. In the most recent cases, the computer that received the information was operated by the FBI. Without the NIT, the government can only communicate with the exit computer. However, thanks to the NIT, the government can direct communications back through the different nodes in Tor to give commands to the user’s computer. The government has used NITs to command a user’s built-in camera to take pictures of the user without the user’s knowledge and send those pictures back to government-controlled computers. In order to implement such a NIT, the government needs a search warrant. [2]

The main issue in the present cases involving the government’s use of the NIT was whether the search and the warrant violated the Fourth Amendment and the Federal Rules of Criminal Procedure. The Fourth Amendment gives the people the right to be secure in their persons, houses, …, and effects, against unreasonable searches and seizures. [3] A search occurs when the government either physically intrudes into a person’s home or invades an area in which the individual reasonably expected privacy. A seizure occurs when the government interferes with a person’s property or possession in a meaningful way. Additionally, Federal Rule of Criminal Procedure 41 provides that the term property includes, among other things, intangible objects and information. [4]

Additionally, Fed. R. Crim. P. 41(b) provides when a magistrate judge has the authority to issue a warrant. A warrant can be issued in the following circumstances:

1) A magistrate judge has authority to issue a warrant in his or her own district when the person or property to be searched is also located within that district.

2) The person or property is within the magistrate's district when the warrant is issued but is moved outside the district before the warrant is executed.

3) In cases of domestic or international terrorism, a magistrate can issue a warrant in any district.

4) The magistrate can issue a warrant to install a tracking device on a person or property located within the district, allowing the person or property to be tracked within and/or outside the district.

5) A warrant can be issued within a district where "activities related to the crime may have occurred."

The different cases dealing with the government's use of the NIT to locate individuals accessing and distributing child pornography all resulted from the same FBI operation. [5] In 2014, the FBI began investigating a website called Playpen, which was notorious for hosting child pornographic material. The FBI was able to locate the administrators in Naples, Florida. However, instead of shutting the site down, the FBI moved it to the Eastern District of Virginia and operated it for 12 days. [6] During those 12 days, the government allowed thousands of child pornographic images to be downloaded. [7]

In February 2015, a magistrate judge in the Eastern District of Virginia issued the search warrant that allowed the FBI to deploy the NIT. Once the NIT was deployed, the FBI was able to locate users' computers, even those outside the Eastern District of Virginia. [8]

The NIT enabled the FBI to seize the following information:

1) The user’s actual IP address

2) A unique identifier created by the NIT that allowed the FBI to distinguish the data received from one computer from that received from others.

3) The type of operating system running on the user's computer and the operating system's username.

4) Whether the NIT had already been delivered to the same computer.

5) The computer’s host name

6) The user’s media access control address. [9]

This enabled the government to search computers all around the world. [10] Once the FBI had a user's location, it would request another search warrant from a magistrate within the user's jurisdiction to search that person's home and computer. The FBI's operation resulted in 137 individuals being arrested and prosecuted for accessing and distributing child pornography. Now, many defendants are challenging the FBI's use of the NIT. [11]

Two of the individuals arrested were an Oklahoma resident named Artebury and a Massachusetts resident named Levin. Based on the findings of the NIT, FBI agents obtained search warrants from judges in Oklahoma and Massachusetts, respectively, which led to the arrest of both defendants. In both cases, the defendant moved to suppress the evidence on the ground that the initial warrant issued by the magistrate in Virginia exceeded the magistrate's authority under Fed. R. Crim. P. 41 and was therefore invalid. Both defendants argued that the subsequent warrants were a direct result of the NIT warrant and therefore tainted, and that any and all evidence produced by the warrants should be suppressed. [12]

In Artebury, the government argued that Rule 41(b)(1) authorized the magistrate to issue the warrant: the defendant sent electronic data to the Eastern District of Virginia when accessing Playpen, and that data was the property seized. The defendant argued, in contrast, that the FBI seized his computer with the first warrant, because the NIT allowed the FBI to command the defendant's computer, located in Oklahoma, to give up confidential information. The court sided with the defendant, reasoning that if Artebury had really sent the information to Virginia, the FBI would not have needed a decryption tool to locate him. The court ruled that the Virginia judge lacked authority and that the search warrant was therefore invalid. [13]

In Levin, the government made the same argument, claiming that by accessing the website the defendant accessed property located within the Virginia district. Again, the court sided with the defendant. It pointed out that the warrant described the place to be searched as the computers accessing Playpen, and that most of those computers were in fact located outside the magistrate's district. Additionally, the court stressed that it does not matter where the server hosting Playpen was located, since the FBI was searching not the website but the individual computers. [14]

In both cases the court ruled that the NIT warrant was not valid under current law. However, the government also argued that even if the NIT warrant was invalid, suppression of the seized evidence would not be an appropriate remedy and the good-faith rule should apply. Violations of Rule 41(b) can be either procedural or substantive. A procedural violation would be, for example, the magistrate failing to indicate on the warrant the time period during which the search is to be executed, or the officer failing to leave a copy of the warrant at the defendant's house. Such violations do not give rise to suppression. Here, the court found that the violation was not merely procedural but substantive.

A substantive violation occurred because the defect "constituted a jurisdictional flaw": the Virginia magistrate never had jurisdiction to grant the NIT warrant in the first place, so the search warrant was void as a matter of law. [15]

Evidence gathered in connection with an invalid warrant should be suppressed if the defendant can show prejudice. In Artebury and Levin, both courts agreed that had the government adhered to Rule 41, the search could not have occurred. Therefore, both defendants clearly suffered prejudice, and the evidence should be suppressed.

The government additionally argued that even if the NIT warrant violated the law, the good-faith exception should bar suppression of the evidence. Under the good-faith exception, evidence should not be suppressed if the officers gathering it acted in good faith. In Levin, the court focused on the experience of the FBI agent: with 19 years of experience, the court concluded, the agent should have known that the warrant was void. [16] The Artebury court also found the good-faith exception inapplicable, because the violation of the rule could be traced directly to the judge's lack of authority to issue the warrant at all. [17]

Having analyzed two cases that granted the defendants' motions to suppress, it is important to point out that the courts are split on the issue. Two judges, in Washington and Milwaukee, sided with the government and found that the FBI had probable cause. The Washington court denied defendant Michaud's motion to suppress, stating that the search was constitutional because a person is unlikely to stumble upon a site like Playpen accidentally and the violation of Rule 41 was merely technical. Therefore, the search was reasonable. [18]

Beyond the question of whether the search was legal, the entire sting operation also raises an ethical dilemma. The FBI allowed the distribution of child pornography to continue for 12 days. Some argue that this is equivalent to a police officer selling drugs to a person and then arresting that person. [19]

These cases are important not only for the accused defendants but for all of us, since the legal questions they raise have already resulted in a push to change current law.

The court in Levin raised the argument that, due to advances in technology, magistrate judges should have the authority to issue NIT warrants outside their own jurisdictions. A Department of Justice proposal that would grant a judge jurisdiction in such instances is currently under consideration. [20] The proposal will go into effect on December 1, 2016, unless Congress acts to prevent the change. [21]

The proposed Rule 41(b)(6) would grant a magistrate judge the authority to issue a warrant allowing the government to use remote access tools "in any district where activities related to the crime may have occurred" to access electronic storage media and seize an electronic copy of the stored information. Such a warrant can be granted if the location of the media or information has been concealed through technological means, or, in computer fraud cases, if the device containing the information has been damaged without authorization.

Supporters of the proposed change argue that it would allow searches to be conducted where the computer to be searched is adequately described in the warrant but its location is unknown. Additionally, it would enable an investigation to proceed when various computers in different districts have to be searched. [22]

I will leave you with a few questions regarding the current cases and the proposal to amend Rule 41:

1) Does the government’s goal to deter child pornography justify the means used by the FBI?

2) Is operating the Playpen page for 12 days similar to a police officer selling drugs to a person?

3) Do you believe the courts that ruled in favor of the government really found the search reasonable, or were they persuaded by the subject matter?

4) Does the proposed Rule 41(b)(6) violate the Fourth Amendment?

5) Is a violation of the right to privacy justified in order to protect children?

[1] U.S. v. Artebury, Case No. 15-CR-182-JHP (R.R. 2016) at 1-3

[2] Id. at 4

[3] Id. at 8

[4] Id. at 9

[5] Id. at 11

[6] Id. at 6


[8] U.S. v. Artebury, at 6

[9] U.S. v. Levin, Case No. 15-10271-WGY (Memorandum & Order 2016) at 5 n.5



[12] U.S. v. Artebury; U.S. v. Levin

[13] U.S. v. Artebury

[14] U.S. v. Levin, at 11-14

[15] U.S. v. Levin, at 15-19

[16] U.S. v. Levin, at 16

[17] U.S. v. Artebury, at 26



[20] U.S. v. Levin, at 20 n.13


[22] U.S. v. Levin, at 20-21 n.13