Human Trafficking: The un(intentional) effects of SESTA

•March 31, 2019 • 6 Comments

Human trafficking has been defined by the United Nations Office on Drugs and Crime (UNODC) as, “the recruitment, transportation, transfer, harboring or receipt of persons, by means of the threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power or of a position of vulnerability or of the giving or receiving of payments or benefits to achieve the consent of a person having control over another person, for the purpose of exploitation.”[1] Because technology so heavily shapes society today, traffickers can conduct almost all of their business online. As a result, the developing field of cyberlaw has become interwoven with the issue of human trafficking.

CDA 230

Section 230 of the Communications Decency Act of 1996 (CDA 230) is one of the most important reasons we have the internet we do today. CDA 230 limits the liability of interactive computer service providers for content created by third-party users.[2] According to the CDA, section 230 was created to increase the public’s benefit from Internet services by restricting government interference in interactive media and protecting the free flow of expression online.[2] Without those protections, most online intermediaries would not exist in their current form; the risk of liability would simply be too high.[3] However, CDA 230 has become a point of tension regarding technology-facilitated trafficking. Some see CDA 230 as an avenue through which providers are shielded from liability as hosts of illegal content created by third parties.


The Stop Enabling Sex Traffickers Act (SESTA), along with a companion bill, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), was prompted by a case involving Backpage, in which executives of the site were arrested on charges of pimping a minor, pimping, and conspiracy to commit pimping.[4] However, the court dismissed the case based on Section 230 of the Communications Decency Act.[4] SESTA then passed, making it illegal for Backpage and similar websites to “knowingly assist, facilitate, or support sex trafficking.”[5]

SESTA in part provided that section 230 does not limit: “(1) a federal civil claim for conduct that constitutes sex trafficking, (2) a federal criminal charge for conduct that constitutes sex trafficking, or (3) a state criminal charge for conduct that promotes or facilitates prostitution in violation of [FOSTA].”[5]

SESTA effectively requires Internet businesses to deploy automated filtering technologies to monitor the activity taking place on their websites. Automated filtering helps surface content that may need further review, but that review must still be completed by the website’s operators. However, many Internet companies cannot dedicate enough staff time to fully mitigate the risk of litigation under SESTA. Instead, they tune their automated filters to err on the side of extreme caution, removing any mention of content that may be related to sex trafficking.[3] It would be a technical challenge to create a filter that removes sex trafficking advertisements but does not also censor a victim of trafficking telling her story or trying to find help.[6]
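To make the over-blocking problem concrete, here is a minimal sketch of the kind of keyword filter described above. The term list and the sample posts are invented for illustration; real platform filters are far more elaborate, but the failure mode is the same: an advertisement and a survivor’s plea for help can trip the identical keywords.

```python
# Hypothetical term list -- not any platform's actual filter.
FLAGGED_TERMS = {"escort", "massage", "companionship"}

def flag_for_review(post: str) -> bool:
    """Return True if the post mentions any flagged term.

    A filter this crude cannot tell an advertisement apart from a
    survivor telling her story -- both trip the same keywords.
    """
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

ad = "Discreet escort services available tonight"
survivor = "I escaped my trafficker, who advertised me as an escort"
print(flag_for_review(ad))        # flagged, as intended
print(flag_for_review(survivor))  # also flagged -- the over-blocking problem
```

Both posts are removed by the same rule, which is precisely the censorship dynamic critics of SESTA describe.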

Effects of SESTA

FOSTA and SESTA are anti-trafficking bills on their face; however, critics argue that both bills promote censorship and weaken internet freedoms.[6] The algorithms being used fail to differentiate between Internet users talking about themselves and users making statements about marginalized groups.[3] Over-censoring of content almost always silences some voices, and the most marginalized voices in society can be the first to disappear.[4]

As a result of SESTA/FOSTA, Tumblr banned adult content on its website on December 17, 2018. Sex worker advocates argue that the bill does nothing to help sex-trafficking victims but does make sex work a federal crime.[6] These bans marginalize sex workers by making it harder for them to safely conduct their business, report abuse, and share safety resources that can help trafficking victims.[7] Many sex workers also used online communities to warn one another about violent potential clients to avoid. Limiting these online communities drives the trafficking problem underground, which results in an increase in violence; studies show that violence against women decreases when online advertising is available to sex workers.[8] There is little evidence that these bans on adult content have been effective in any way, but they do make bad actors more difficult to find.[9] Additionally, Backpage aided law enforcement in capturing traffickers and provided tips on criminal activity.[10]

Many believe that sex traffickers will soon learn which words or phrases trigger the filters and avoid them by substituting others.[3] Platforms that rely on these algorithms should carefully balance enforcing standards with respecting users’ right to express themselves without criminalization.

  1. Should websites be held accountable/liable for third party content?
  2. Would a balancing test be effective when discussing the use of algorithms? Which factors should be considered?
  3. Could SESTA have been created to criminalize sex work under the guise of anti-sex trafficking efforts?
  4. What amendments would you suggest to SESTA to ensure that it is achieving its purpose?





[2] 47 U.S.C. § 230(a).









Are loot boxes the new slot machine?

•March 24, 2019 • 6 Comments

A loot box is essentially a virtual container holding virtual items for the game through which it was purchased. You pay a few dollars and in return receive a box with a random assortment of virtual items. Consumers spend roughly $30 billion a year on loot boxes, and these profits have made loot boxes an essential aspect of almost every new game.[1] This growing industry has caused concern among politicians and regulatory agencies because of loot boxes’ similarities to gambling, the psychology behind them, and the predatory practices of companies hoping to maximize profit.

Psychological Component

            Loot boxes are certainly not a new concept. Baseball card collectors have long sought the rare chase card in a pack, enticed by the chance of pulling it. Psychologists describe this enticement as “variable rate reinforcement,” which explains that “[t]he player is basically working for [a] reward by making a series of responses, but the rewards are delivered unpredictably.”[2] The excitement produced by loot boxes can be tracked by observing the brain. As Dr. Luke Clark, director of the Centre for Gambling Research at the University of British Columbia, explains, “We know that the dopamine system, which is targeted by drugs of abuse, is also very interested in unpredictable rewards. Dopamine cells are most active when there is maximum uncertainty, and the dopamine system responds more to an uncertain reward than the same reward delivered on a predictable basis.” This leads purchasers to chase rewards through the randomized process, whether it costs $10, $20, or even $500 to obtain the item they are looking for.
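The “chase” dynamic Dr. Clark describes can be simulated directly. The sketch below uses entirely hypothetical drop rates and a hypothetical $2.99 box price (no real game’s odds are implied) to estimate what pursuing a rare item through a variable-rate reward schedule can cost.

```python
import random

# Hypothetical drop rates and price -- no real game's odds are implied.
DROP_TABLE = [("common", 0.799), ("rare", 0.150), ("epic", 0.050), ("legendary", 0.001)]
BOX_PRICE = 2.99

def open_box(rng: random.Random) -> str:
    """Draw one item: an unpredictable reward on a variable-rate schedule."""
    roll, cumulative = rng.random(), 0.0
    for item, p in DROP_TABLE:
        cumulative += p
        if roll < cumulative:
            return item
    return DROP_TABLE[-1][0]  # guard against floating-point round-off

def cost_to_get(target: str, rng: random.Random) -> float:
    """Keep buying boxes until the target item drops -- the 'chase' behavior."""
    boxes = 0
    while True:
        boxes += 1
        if open_box(rng) == target:
            return boxes * BOX_PRICE

rng = random.Random(42)  # seeded so the run is reproducible
spends = [cost_to_get("legendary", rng) for _ in range(100)]
print(f"average spend to land one legendary: ${sum(spends) / len(spends):,.2f}")
```

With a 0.1% drop rate, the expected chase runs about a thousand boxes, which is exactly the open-ended spending that draws regulators’ attention.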

           Gaming companies have even incorporated the gambling aesthetics used by casinos. In many games, opening a loot box sends a flurry of lights across the screen, and the rarity of the item flashes briefly before the item itself is revealed. Jeremy Craig, senior game designer for Overwatch, explains that this entire process is “all about building the anticipation.”[2] Other games, such as Counter-Strike: Global Offensive, even use a crate scroll that perfectly mimics a slot machine.[2] The concern with games adopting this style is the lasting influence it can have on their younger audience. A recent study performed in the United Kingdom attributed an increase in underage gambling to early exposure to loot boxes.[3]

Example of opening a loot box.

            Current Legal Issues in the United States

            Most relevant gambling laws are at the state level. “At a high-level, an oversimplified definition of gambling involves: staking something of value (consideration) for a chance to win something of value (a prize). If all three elements are present in an activity (prize, chance, and consideration), it may be gambling.”[4] A simple analysis suggests loot boxes could fall within this definition, but the secondary market and the element of value complicate matters. For example, many in the video game industry insist that loot boxes should not be considered gambling because of the inability to cash out and sell the digital assets.[5] While this is true in some games, some of the most popular (CS:GO, PlayerUnknown’s Battlegrounds) allow players to sell their items on a secondary market, where items can fetch hundreds or even thousands of dollars.

           The most obvious abuse of this happened a few years ago when two popular YouTube streamers created a gambling site where players could essentially use skins as chips; a raffle would occur, and one player would win all the skins.[6] The YouTubers advertised their gambling site to many underage participants without disclosing their ownership in the company. The Federal Trade Commission brought a complaint against them; however, it dealt exclusively with their deceptive practices toward their audience and did not address whether the site constituted gambling.[7] As of now, there is no solid consensus on whether a court would consider loot boxes gambling, but legislators have sought other avenues.

            In Hawaii, State Representative Chris Lee has criticized the gaming industry for predatory practices and has proposed various legislation in hopes of curbing the impact on young children. He introduced House Bill 2686, which would prohibit retailers from selling games with randomized loot box rewards to anyone under 21 years of age.[8] He also proposed House Bill 2727, which would require “a prominent, easily legible, bright red label” indicating that loot boxes in the game contain “gambling-like mechanisms which may be harmful or addictive.”[8] United States Senator Maggie Hassan has asked the FTC to investigate loot boxes.[9] Senator Hassan explained her concern about the “close link” loot boxes have to gambling and the possible negative impact they can have on children. Hopefully the FTC investigation will determine whether children are being adequately protected and whether we need to adopt legislation, as many other countries have, to protect children.

            Legal Approaches in Other Countries

           The Netherlands and Belgium have come down on the gaming industry the hardest. The Netherlands Gaming Authority (NGA) has held that “offering gamers ‘a chance’ with real money is prohibited without a license,” and considers loot boxes similar to slot machines and roulette, which are regulated as gambling.[10] Following the NGA’s investigation, the Netherlands banned loot boxes and required that all games remove them; Belgium followed with the same decision a week later. China, by contrast, requires that game developers disclose the probability of obtaining a rare item in their loot box systems.[4] All in all, the best approach to loot boxes remains unclear. However, as the industry sees nothing but growth, what is certain is the need to further study loot boxes and their lasting impact on young children.


  1. Do you believe that Loot Boxes generally fall into the definition of gambling or are they more comparable to buying a pack of baseball cards?
  2. Is the secondary market necessary for it to be considered gambling? If there is no value for the player to obtain outside of their own enjoyment, could loot boxes still be considered gambling?
  3. Does limiting a player’s ability to trade their items as they wish infringe upon the property rights of those who wish to participate in gambling markets like CSGOlottery? Or does a EULA address all such concerns?
  4. Does the stricter approach of the Netherlands and Belgium seem to be a better step, or is China’s approach of requiring developers to disclose the odds for each loot box preferable?
  5. Is this simply an issue for parents to concern themselves with, should the decision be left to regulatory agencies like the Entertainment Software Rating Board, or do we need legislation like that suggested by State Representative Chris Lee?











Ghost Guns

•March 15, 2019 • 6 Comments

3D-printed guns, or “Ghost Guns,” are much more than a sci-fi fantasy; they have become part of our reality. In 2012, a Texas-based organization named Defense Distributed posted Computer Aided Design files (CAD files) to its website that allowed anyone with a 3D printer and an internet connection to manufacture an untraceable gun in their garage. The State Department, which has authority under 22 U.S.C. § 2778 to regulate those who export firearms, found that Defense Distributed had engaged in unregulated export of firearms data in violation of the statute and ordered the CAD files taken down. Defense Distributed filed suit in 2015, claiming that the State Department’s ruling was an unconstitutional prior restraint on speech in violation of the First Amendment. After winning a few cases, the State Department settled under unusual circumstances and decided to allow Defense Distributed to post the CAD files online without restriction. In response, eight states (MA, CT, NJ, PA, OR, MD, NY, WA) as well as the District of Columbia sued the State Department, arguing that the settlement violates the Administrative Procedure Act as well as the Tenth Amendment.[1] The states also secured an injunction prohibiting Defense Distributed from posting the plans until the suit concludes.

Defense Distributed initially argued that the State Department’s enforcement of its regulation was an unconstitutional prior restraint. A prior restraint is a rule or regulation that prohibits speech before the speech actually occurs. A prior restraint is only unconstitutional if it covers speech that is otherwise protected by the First Amendment. While internet postings are typically protected speech under the First Amendment, the Court has outlined certain kinds of speech that are not protected. Incitement, for example, is speech that goes beyond normal speech and calls for imminent lawless action or violence. While Defense Distributed doesn’t explicitly call for its users to download the plans and commit acts of violence, can it advance any other justification for posting plans to create untraceable firearms and firearm parts for anyone to download?

Defense Distributed did try to argue that its position advanced individuals’ access to firearms in exercise of their Second Amendment rights. However, Second Amendment rights are not absolute. For example, a person must usually be 18 to purchase a long rifle and 21 to purchase a handgun. Additionally, virtually all states require a person to obtain a license before they can regularly carry a gun. Even then, restrictions exist for carrying firearms at schools, hospitals, government buildings, and establishments that earn their revenue primarily from alcohol. All firearms manufactured in the United States are required to have a traceable serial number printed on them, and, unless special circumstances exist, all sales must be recorded and reported.

In response to the Ghost Gun phenomenon, a few states have introduced amendments to existing gun laws to cover Ghost Guns. New York, for example, introduced an amendment criminalizing the manufacture and sharing of information necessary to create a Ghost Gun.[2] The amendment explicitly defines Ghost Guns and criminalizes their possession and manufacture (if unregistered, without a serial number, and not made by a licensed gunsmith). Washington has introduced an amendment with a similar effect. Although the bill does not specifically contain the phrase “Ghost Gun,” it dedicates an entire section to “Untraceable Firearms” and goes to great lengths to cover firearms without serial numbers registered with a federally licensed manufacturer.[3] The bill further zeroes in on Ghost Guns by creating an exception for antique firearms. New Jersey has likewise put forth an amendment addressing Ghost Guns, specifically criminalizing the purchase of firearm parts to “illegally manufacture an untraceable firearm, known as Ghost Guns.”[4]

Comparing each of these statutes, it seems the states’ main concern is that these guns cannot be traced. When a person prints a gun in their garage, they are not printing a serial number on the new firearm, nor can they print a valid serial number without knowing which numbers are already attributed to other firearms. This is further evidenced by the court rulings in the 2015 Defense Distributed case. The court there weighed the interests the states put forward (security, protection of citizens, ability to trace firearms) and the permanent harms the states would suffer were those interests abridged, against the harms Defense Distributed would suffer to its First Amendment interests if it were barred from continuing to post the plans.

Legislation like this is not likely to be struck down, especially in light of the previously mentioned existing restrictions on a person’s right to bear arms. These statutes are narrowly tailored to address a specific problem, and they attempt to bring a new form of firearm into compliance with existing registration laws. The bigger obstacle I see regulators facing is getting these statutes passed in their states. Many states (although not those mentioned here) have a strong majority of people who believe in an absolute exercise of the Second Amendment, and we likely won’t see restrictions in those states any time soon. How effective can restrictions be if they don’t apply to everyone?

Is a harms balancing test the right way to determine which rights matter more in a case like this? If so, is there ever a way to determine which rights are weighed heavier than others? When looking at the harm, should that test be whether the harm is temporary or permanent? What if the harm suffered by both sides is temporary, or both sides suffer a permanent harm?

Is the untraceable nature of Ghost Guns the biggest reason for concern? What about their creation and use by terrorist organizations, or by those wishing to carry out mass shootings? We have seen that our current registration system does little to prevent these occurrences anyway.

Where should gun groups like the NRA come down on this issue? On one hand, they have built their entire platform on the absolute protection of the Second Amendment. On the other hand, big gun manufacturers (Beretta, Smith & Wesson, Glock…) are the main source of revenue fueling the NRA. Ghost Guns have the potential to cut into their profits greatly, so their existence seems to present future financial concerns for these manufacturers. Will preservation of profits cause them to take a major pro-regulation stance on Ghost Guns?






•March 13, 2019 • Leave a Comment

While we were on break there were several news stories touching on issues we’ve discussed in class. Some of them may be relevant to your papers, but even if not, they are all interesting.  I’ve cobbled them together from across a number of sources.


Three men cop to $21 million vishing and smishing scheme

JavaScript infinite alert prank lands 13-year-old Japanese girl in hot water


Over 8,000 marijuana convictions in San Francisco cleared thanks to computer algorithm


Ethereum, a 51% Attack and How to Change an Unchangeable Blockchain


NSA has shut down phone call record surveillance







The Future of Cyber-Aggression in an International Framework

•February 24, 2019 • 6 Comments

What is a Cyber-Attack?

            In order to fully understand how states can defend against and deter cyber-attacks, we must understand exactly what a cyber-attack is. This is more than a technicality; the boundary must be defined clearly enough that an executive knows how to defend his nation against attacks.[3]

            The definition of a cyber-attack is not a matter of consensus. The commonly accepted usage of the term includes criminal, espionage, and terrorist activities in addition to military ones.[3] The RAND Project AIR FORCE study defines cyber war as “a campaign of cyber attacks launched by one entity against a State and its society, primarily but not exclusively for the purpose of affecting the target state’s behavior.”[3] While this definition limits the target of cyber aggression to a state actor, it helps illustrate the disparities in defining cyber-attacks.


            Several issues preclude the effective deterrence of state-sponsored cyber attacks. The first is that while there has been progress in creating a set of cyberspace norms, there are hardly any consequences for states that violate them.[2] The framework developed to this point includes the application of international law to cyberspace, the acceptance of certain voluntary norms of state behavior, and the adoption of confidence- and transparency-building measures.[4] As far as the establishment of norms goes, nothing has made a bigger impact than inaction: the global community has not done an effective job of punishing and deterring bad actors in cyberspace.[4]

            It is true that deterrence is a complex issue. An effective framework involves a combination of strengthening defenses, establishing expectations for international actors, and publicly declaring a strong policy. [4]. Progress has been made in these areas in the global framework, but the biggest problem facing realistic enforcement lies with attribution.


            Attributing state-sponsored cyber-attacks to specific entities is difficult, and the methodologies for doing so reliably are still evolving. The most exhaustive means of protecting a company are typically too expensive for most companies, and information about attacks on governments is usually classified.[1] Unlike the physical world, there are no indicators giving warning of the timing or location of a cyber-attack.[4] Moreover, there is no political standard for attributing an attack to a state; the evidence would need to establish almost 100% certainty.[4]

There are some ideas about how this can be remedied. Chris Painter, a former cyber diplomat at the US State Department, believes that states need to speed up attribution of cyber attacks and quickly craft a credible response, all as part of a collective multilateral action plan.[2] Delays in attribution are due in part to the technical difficulties of gathering evidence and to balancing the benefits of going public against the risk of compromising the sources and methods of intelligence gathering.[2] Painter argues that all of these cycles need to be shortened, and calls for states to “name-shame” attackers, which can be an effective tool when used collectively.[2] However, this tool has clear shortcomings when it comes to states like Russia and North Korea, which would not be affected by “name-shaming” because they believe their power is enhanced by having their actions attributed to them.[2]

A more effective way to deter state cyber-attacks would be for states to impose diplomatic, economic, and law enforcement sanctions on states following attacks.[2] However, this needs to be done more regularly and in a more timely fashion for states to take these threats seriously.[2] For example, the US government has had the power to impose cyber-sanctions since 2015 but has only used it twice.[2] Such infrequent sanctions are not enough to deter state actors from maliciously attacking states in order to influence their actions.


            In order to develop a more effective international framework for deterrence, a few responses need to be considered. First, measures taken against bad actors need to be more than symbolic; they must have the potential to change that actor’s behavior.[4] The relationships between the states obviously need to be considered, but potential escalation needs to be in the back of every decision-maker’s mind. This is particularly difficult because escalation paths are not clearly defined for events that originate in cyberspace.[4]

            Collective action against a bad actor is almost always a better way to address the situation than to respond as one state. The key problem here is that of information sharing [4]. Every state will want to satisfy itself before taking the political step of attribution, and sharing sensitive information among states with different levels of capability to protect that information is a tough issue [4].

           State-sponsored cyber attacks are on the rise, and not just against government and military organizations. The Digital and Cyberspace Policy program at the Council on Foreign Relations has aggregated nearly 300 state-attributed incidents, half of which targeted the public sector, and these are just the ones that are publicly known.[1] It is clear that the international community must give much more attention to cyber-attacks. In the short term, states need to react to cyber-attacks more visibly and quickly to reinforce the notion that bad actors will pay dearly for this sort of behavior.


Cryptocurrency and the Rise of Ransomware Attacks

•February 18, 2019 • 6 Comments

Cryptocurrency is virtual currency that is not issued by a central authority or subject to government manipulation. [1] In this way, cryptocurrency can be compared to gold bars, which people often buy as an investment in the hope that they will increase in value. [1] Fiat currency, such as dollar bills, by contrast, is issued by a central authority, is subject to government manipulation, and is not expected to increase in value. [1]

Ransomware attacks occur when cybercriminals encrypt victims’ files using data-encrypting malware and demand payment, usually in the form of cryptocurrency, as the means for victims to get their files back. [2] Ransomware attacks have been around for quite some time. [2] In the past, before bitcoin and other cryptocurrencies existed, cybercriminals used online payment methods such as PayPal or Western Union, which were linked to bank accounts and left the criminals vulnerable to discovery. [2] Cybercriminals even went so far as to use postal services to receive payment from their victims. [2] Since the dawn of cryptocurrency, however, ransomware attacks have become more frequent, possibly because cryptocurrency lets cybercriminals remain anonymous and avoid law enforcement. [2]

The U.S. recently indicted two Iranian nationals, Faramarz Shahi Savandi and Mohammad Mehdi Shah Mansouri, for alleged ransomware attacks that had been going on for years and affected more than 200 victims. [5] The attackers demanded bitcoins which resulted in more than $6 million in ransom payments. [5] Once ransom money was paid, two other Iranian nationals allegedly converted the bitcoins into Iranian riyals. [5] This is not the first time that the U.S. has issued charges over a ransomware attack. [5] The U.S. also issued charges against a North Korean man for a ransomware attack that affected FedEx, Britain’s National Health Service, and others. [5]

Bitcoin is often the choice for cybercriminals when demanding payment from victims because it has a certain level of anonymity and can be easily purchased by victims for payment. [2] Cybercriminals try their best to remain anonymous when demanding bitcoin by using mixing services, which are in effect money laundering for cryptocurrencies. [2] Instead of making it easy for law enforcement to find the specific wallet that victims’ payments go to and potentially identify who is behind the attack, cybercriminals take the payments, mix them with those of tens of thousands of other wallets, and eventually receive their ransom payments back after they have been blended with other money. [2]

Unfortunately for cybercriminals, law enforcement has been taking advantage of the fact that bitcoin is not completely anonymous. [3] Law enforcement can use the blockchain which is where the transactions and addresses of bitcoin users are recorded, to track down these cybercriminals. [3] Sure mixing services exist for bitcoin and other cryptocurrencies, but usually the only ones using mixing services are those engaged in illegal activity, meaning as soon as you use a mixing service you’ve already raised a red flag. [3] Now unfortunately for law enforcement, cybercriminals are turning to a new type of cryptocurrency in their attempts to remain anonymous. [3]
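The kind of ledger-following law enforcement performs can be sketched as a simple graph traversal. The addresses and payment flows below are invented for illustration; real chain analysis involves clustering heuristics and far larger graphs, but the core idea is a walk over the public transaction record.

```python
from collections import deque

# Toy transaction graph: address -> addresses it sent coins to.
# All addresses and flows here are made up for illustration.
LEDGER = {
    "ransom_addr": ["mixer_in"],
    "mixer_in": ["hop1", "hop2"],
    "hop1": ["exchange_acct"],
    "hop2": ["exchange_acct"],
}

def trace(start: str) -> set[str]:
    """Follow every outgoing payment from `start` across the public ledger.

    Because every bitcoin transaction is recorded on-chain, investigators
    can walk the graph until funds reach an entity (like an exchange)
    that knows its customers' identities.
    """
    seen, queue = set(), deque([start])
    while queue:
        addr = queue.popleft()
        if addr in seen:
            continue
        seen.add(addr)
        queue.extend(LEDGER.get(addr, []))
    return seen

print(trace("ransom_addr"))
```

Even after the funds pass through the hypothetical mixer here, the trail still terminates at an exchange account, which is why mixing itself, rather than the coins, becomes the red flag.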

Monero, launched in 2014, is a newer cryptocurrency that offers cybercriminals additional ways to remain anonymous. [3] Monero uses ring signatures to obscure the identities of senders and recipients. [3] Ring signatures combine a user’s account keys with public keys from monero’s blockchain to create a list of possible signers, meaning that no one particular signature can be linked to a specific user. [3] Monero also uses stealth addresses, which are randomly generated, one-time addresses created for each transaction on behalf of the recipient. [3] As mentioned earlier, mixing services are available for certain cryptocurrencies, but using one often raises a red flag; with monero, all coins used in transactions are always mixed, so no red flags are raised. [3] Monero users can also selectively share their account transactions through a view key. [3] One shortcoming monero originally faced was that it obscured the senders and recipients of transactions but not the amounts. [3] Monero has since introduced RingCT, which conceals not only the identities of the sender and recipient but also the amount of the transaction. [3] With this level of privacy, monero offers fungibility: since monero transactions are untraceable, no two coins are distinguishable from one another. [3] With bitcoin, by contrast, the transaction history is recorded on the blockchain, which means bitcoins associated with theft may be shunned by merchants and exchanges. [3]

Ransomware attacks cause plenty of trouble and inconvenience, but cybercriminals also partake in cryptojacking. [4] Cryptocurrencies are generated through a process known as mining. [1] Many cryptocurrencies have a finite number of units that can be mined, so the integrity of the currency is not diluted. [1] Mining cryptocurrency, however, requires a great deal of processing power, so cryptojackers look to large enterprises that have it, one of which happened to be Tesla. [4] Cryptojackers found an administrative portal for cloud application management that was not password protected and went in with mining malware. [4] With Tesla already using so much electricity, the cryptojackers could have gone unnoticed for quite some time had the security firm RedLock not noticed cryptomining activity on Tesla’s exposed server. [4]
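The processing power being stolen goes toward proof-of-work mining, which at its core is a brute-force hash search. The toy sketch below uses an artificially low difficulty (real networks demand astronomically more work) to show why mining is so compute-hungry, and therefore why cryptojackers covet other people’s hardware.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 hash starts with `difficulty`
    zero hex digits.

    This brute-force search is what consumes the processing power --
    and what cryptojackers steal other people's machines to perform.
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("toy block")
print(nonce, digest[:12])
```

Each extra zero of difficulty multiplies the expected number of hash attempts by sixteen, which is why serious mining scales to warehouses of hardware, or to someone else’s cloud account.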

It seems that cryptocurrencies have made cybercriminals’ jobs easier, since cryptocurrencies have ways to help users remain anonymous. Further, cryptocurrencies are only becoming more private, which does not help law enforcement in its search to uncover perpetrators of ransomware attacks. Cryptocurrencies may be making ransomware attackers’ jobs easier, but criminals will always find a way around safeguards to get what they want. With new cryptocurrencies constantly being introduced, it would not be surprising to see an increase in cryptojacking as well.

Questions to consider:

  • Would restrictions placed on mixing services help in the search for ransomware perpetrators? Would restrictions on mixing services be allowed?
  • Monero is gaining acceptance on the dark web. Is Monero the new Bitcoin for cybercriminals?
  • Anyone can partake in mining. Should cryptocurrency developers make mining harder (require more processing power), or would that only add to the currency's value and appeal?






Algorithms in Criminal Justice – Biased Technology?

•February 9, 2019 • 6 Comments

As the interest in artificial intelligence has blossomed, so too has the idea of using artificial intelligence in the criminal justice context. Artificial intelligence has been used in various aspects of the criminal justice system, such as algorithms to determine sentencing and risk assessments when setting bail. Although the use of artificial intelligence and algorithms is not new, government agencies are coming up with innovative ways to use them.

Cities across the country have been plagued with gun violence, leaving police departments, citizens, and the government searching for solutions. In neighborhoods where residents prefer not to call the police, automatic notification of gunfire may help officers respond quickly to shootings they otherwise would never have heard about. ShotSpotter is an artificial intelligence technology that detects gunfire using sensors placed thirty feet in the air and under a mile apart.[1] ShotSpotter filters the audio through an algorithm that isolates the sound of gunfire and can locate a shot to within ten feet of where it occurred.[1]
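SST does not publish its algorithm, but acoustic gunshot location is generally done by multilateration: each sensor timestamps the bang, and the differences in arrival times constrain where the shot occurred. A crude grid-search sketch of that idea, with sensor positions invented for illustration:

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # meters/second in air at ~20 C

def arrival_times(source, sensors):
    """Time for the sound to reach each sensor."""
    return [math.dist(source, s) / SPEED_OF_SOUND for s in sensors]

def locate(sensors, observed, size=1000, step=10):
    """Grid-search the point whose pairwise arrival-time differences
    best match the observed ones (least-squares over sensor pairs).
    Real systems solve this analytically and to far finer resolution."""
    pairs = list(itertools.combinations(range(len(sensors)), 2))
    best, best_err = None, float("inf")
    for x in range(0, size + 1, step):
        for y in range(0, size + 1, step):
            t = arrival_times((x, y), sensors)
            err = sum(((t[i] - t[j]) - (observed[i] - observed[j])) ** 2
                      for i, j in pairs)
            if err < best_err:
                best, best_err = (x, y), err
    return best

sensors = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]  # ~1 km apart
shot = (400, 250)
estimate = locate(sensors, arrival_times(shot, sensors))
assert estimate == (400, 250)
```

Only time *differences* matter, so the sensors never need to know when the trigger was pulled; they just need tightly synchronized clocks.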

ShotSpotter is not the first technology able to detect gunfire. The Naval Research Laboratory and Maryland Advanced Development Laboratory have worked for years on detecting small arms gunfire using land-based systems and larger artillery using airborne systems.[2] The main detection mechanism used to sense the muzzle flash is called Mid-Wave InfraRed (MWIR).[2] MWIR has been used to detect small arms gunfire from enemy snipers since as early as the 1960s.[2]

By 1996, a program called VIPER was using MWIR to detect gunfire in tests performed on military bases.[2] During the VIPER program, the researchers fired over 15,000 rounds of ammunition from different types of small arms, with small arms being considered anything .50 caliber and under.[2] The tests showed that all of the small arms tested could be detected at and beyond the effective firing range of the gun.[2] Technological advances brought better cameras, namely wide-angle cameras, which significantly improved the signal-to-clutter ratio in gunfire detection and in turn reduced the number of false alarms recorded.[2]

By 2002, attention had shifted to airborne systems in Forward Eyes UAVs to detect larger artillery, in response to the Washington D.C. sniper attacks.[2] These airborne systems were designed to detect large artillery, such as mortars, from a relatively low altitude and then provide the GPS coordinates of the shooting event back to a ground station.[2]

It was only a matter of time before technology like ShotSpotter made its way from military base tests to use by law enforcement. ShotSpotter is made by the California company SST.[1] According to SST, ShotSpotter helps fill a gap in current gun-violence data, which overlook shots fired to scare people, shots that kill animals, and gun battles in which bystanders do not call the police.[1] According to SST, less than 20% of gunshots result in a 911 call.[1] This data is overlooked because the existing numbers generally come from three main sources: 911 calls, mandatory reports hospitals file when they treat gunshot victims, and coroner reports on homicides or suicides by gunshot.[1]

In 2011, SST made ShotSpotter more affordable for small to mid-size cities when the company began using a cloud-hosting platform with the program.[1] As the cost went down, more police departments began using ShotSpotter to help detect gunfire. Today, approximately seventy cities across the country are using ShotSpotter.[1]

Despite its increase in use among police agencies around the country, ShotSpotter does not have all the bugs worked out. For example, the program has difficulty differentiating between the sounds of gunfire from the sounds of firecrackers or cars backfiring.[1]

Even more troublesome is the possibility that police may use ShotSpotter selectively, arresting more people in some neighborhoods than in others.[1] There is little doubt that the criminal justice system disproportionately preys on people of color. In the last thirty years the prison population has quadrupled, and 58% of those incarcerated are black or Hispanic, even though these groups make up only about a quarter of the country's total population.[3] Even more worrisome, black people are sent to prison for drug crimes at ten times the rate of white people, even though white people use drugs five times more than black people.[3]

As a result, it should not be surprising that these racial disparities also boil over into the artificial intelligence arena of the criminal justice system. Currently, one of the most common uses of artificial intelligence in the criminal justice system is in risk assessment tools. These tools analyze data that may be correlated with future criminal activity and are used when imposing sentences, setting bail, and determining release.[3] Unfortunately, although not surprisingly, these artificial intelligence recidivism calculations correlate strongly with race.[3]

One study looked at over 7,000 risk scores assigned to people arrested in Broward County, FL in 2013 and 2014 using a program from a for-profit company called Northpointe.[4] The results demonstrated that the program wrongly flagged black individuals as future criminals at almost twice the rate of white individuals.[4] Moreover, white individuals were wrongly labeled "low-risk" more often than black individuals.[4] This disparity could not be explained by an individual's prior crimes or the types of crimes for which the individual was arrested.[4] In an independent statistical test that isolated the effect of race from other factors such as recidivism, criminal history, age, and gender, black individuals were still 77% more likely to be labeled at a higher risk of committing future violent crimes and 45% more likely to be predicted to commit any type of crime.[4] Northpointe disagreed with these results.[4]
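The disparity measured here is a difference in false positive rates: among people who did *not* go on to reoffend, what share were flagged high-risk in each group? That check takes only a few lines. The records below are invented for illustration, not the Broward County data:

```python
def false_positive_rate(records, group):
    """Among members of `group` who did NOT reoffend, the share
    wrongly flagged as high risk."""
    innocent = [r for r in records
                if r["group"] == group and not r["reoffended"]]
    return sum(r["high_risk"] for r in innocent) / len(innocent)

# Hypothetical audit records: group, risk flag, and actual outcome.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]
# In this toy data, group A's non-reoffenders are flagged at
# twice the rate of group B's -- the kind of gap ProPublica found.
assert false_positive_rate(records, "A") == 2 * false_positive_rate(records, "B")
```

Notably, a tool can match this audit's other headline metric (overall accuracy) across groups while still failing it; which error rates to equalize is itself a contested policy choice.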

Northpointe’s risk assessment tools are the most widely used assessment tools in the country.[4] Although Northpointe does not publicly disclose the factors it uses to calculate a person’s risk score, the assessment program requires answers to 137 questions, none of which are related to race.[4] However, Northpointe’s program asks questions such as:

“Was one of your parents ever sent to jail/prison?”

“How many of your friends/acquaintances are taking drugs illegally?”

“How often did you get in fights while at school?”[4]

It is unclear whether the biased results from artificial intelligence programs are due to bias rooted in the data used to train the algorithm in the first place, to the humans who program the risk assessment algorithms, or to some combination of the two.[3] Only a few studies have been done on these risk assessment tools, so the cause of the bias, and the accuracy and validity of the tools, remains a mystery.[4] Researchers examined nineteen risk assessment methodologies used in the U.S. and found that "validity had only been examined in one or two studies" and that "frequently those investigations were completed by the same people who developed the instrument."[4] These findings raise the issue of bias in the studies themselves. With the validity and accuracy of these risk assessment programs questionable at best, it should be alarming that defendants rarely have an opportunity to challenge their risk assessment scores, which could create constitutional problems that courts need to be ready to address.[4]

A 2017 Harvard Political Review article suggests ways to decrease bias in artificial intelligence in the criminal justice arena. First, open data and algorithmic transparency should be emphasized.[3] This would make the data available to researchers who can investigate the validity and accuracy of these risk assessment programs.[3] Second, the U.S. government should raise its standards for the technology companies it contracts with, or even build the technology internally.[4] For example, the government should run extensive simulations using fake information before a contractor is selected, to ensure no discriminatory outcomes are present in the private companies' algorithms.[4] Furthermore, a special commission should be put in place to ensure the algorithms are not biased, encouraging transparency through grant-funding incentives and a top-down approach to enforcing standards.[4]

New York City is the first city to create a task force on automated decision systems.[5] The task force will recommend how each agency within the city should be held accountable for using algorithms to make important decisions.[5] AI Now proposed a framework for New York City's task force that is based on Algorithmic Impact Assessments (AIAs).[5] There are four main goals of AIAs:

The first goal deals with the public’s right to know about the algorithms being used in their communities and how they are used.[5] AI Now suggests publicly listing and describing the systems that are used to make important decisions that affect identifiable individuals or groups and that this information should include the purpose, reach, and potential public impact of the use of the algorithms.[5]

The second goal ensures accountability with the use of algorithmic systems by providing opportunities for external researchers to review, audit, and assess the systems being used to be able to detect potential problems.[5] AI Now suggests possibly even an independent, government-wide body that oversees the accessibility to researchers.[5]

The third goal is to increase expertise and capacity within public agencies so that they are able to anticipate issues such as disparate impacts or due process violations on their own, without relying on a third party to intervene.[5] To retain public trust, agencies must be experts on their own algorithmic systems.[5]

The fourth goal ensures the public can respond to and dispute an agency's approach to algorithmic accountability, which will further instill public trust in government agencies.[5] Moreover, due process will be strengthened by offering the public the opportunity to work with agencies on the use of algorithmic systems before, during, and after the assessment.[5]

With ShotSpotter taking off before issues surrounding algorithms used in risk assessment are solved, it will be interesting to observe whether ShotSpotter has the same issues and if so, if it will be handled in the same way. Like with any new technology or artificial intelligence there are many questions to be considered.

  • Will ShotSpotter have the same bias issues as other algorithmic technologies such as risk assessment tools? If so, could the bias be addressed with the methods suggested for risk assessment tools, or would a different approach be needed? If not, how is ShotSpotter different?
  • What constitutional issues could be raised by the use of artificial intelligence such as ShotSpotter or risk assessment algorithms? How could these constitutional issues be overcome while still leaving a place for the use of artificial intelligence and algorithms in the criminal justice context?
  • Finally, what other goals should be addressed by a task force like the one created in New York? How would a task force like this be implemented?


Gaming Companies Aspire and Struggle to Enter Online Gambling

•September 6, 2018 • Leave a Comment

In 2012, the global gambling market was estimated to be worth $417 billion.  According to H2 Gambling Capital, only 8.1% of that $417 billion came from online or interactive gambling.[1]  In light of those numbers, the opportunity for enormous revenue growth by gaming companies via online gambling is obvious and has not gone unnoticed.  A prime example of a gaming company's attempt and struggle to capture its share of the gambling market is Zynga.

But there is hope.

The recent ruling in Spry Fox, LLC v. LOLApps, Inc. shows the legal theory surrounding game copyright may be slowly expanding in a way that offers developers more protection for more parts of their work.[2]  Spry Fox is the maker of Triple Town, a popular match-three/village-building game. It is suing 6waves Lolapps, which cranked out the extremely similar Yeti Town after backing out of negotiations to produce an iOS Triple Town port.[3] The games are practically identical from a basic gameplay and progression perspective, right down to the prices of analogous items in the in-game stores and similar language in explanatory dialogue boxes.[4] Yeti Town's main innovation seems to be small cosmetic differences: the enemy characters are changed from bears to yetis, the graphics are rendered in 3D polygons rather than 2D sprites, and so on.


Although Spry Fox cannot copyright the basic rules and idea of Triple Town, the court noted that Spry Fox can claim copyright protection for things like "plot, theme, dialogue, mood, setting, pace, and character" (the court compared games to movie screenplays in this regard). And while 6waves' Yeti Town didn't precisely copy any of these elements from Triple Town, the court found the similarities in these areas were great enough to let the case go forward. The court noted: "A writer who appropriates the plot of Gone with the Wind cannot avoid copyright infringement by naming its male protagonist 'Brett Cutler' and making him an Alaskan gold miner instead of a southern gentleman. The differences between Triple Town and Yeti Town are more meaningful, but it is at least plausible that they are insufficient to overcome the similarities."[5]

So how does this impact the digital world? Hopefully, video game copyright owners will soon receive more protection against copycat developers, since courts seem to be getting more familiar with disputes involving video games. This was not the case in 2007. Then the big question was: what happens when one avatar tries to sue another avatar for copyright infringement in an actual court? Kevin Alderman, known in Second Life as Stroker Serpentine and one of SL's leading entrepreneurs, tried to do just that. He believed Volkov Catteneo was selling unauthorized copies of his SexGen bed, a piece of furniture with special embedded animations that enable players to more or less recreate an adult film with their avatars.[6] Alderman sold his version for the L$ equivalent of USD $45, while Catteneo sold his alleged knockoff for a third of that price, undercutting him. Alderman threatened to sue, but he had one small issue: he didn't know whom to sue, since he didn't know the real-life identity of the person behind the avatar. Maybe he would have better luck in today's courts.

How does this affect machinima production? I will admit, until yesterday I'd never heard of machinima. Even after reading the materials, I was still clueless. Now, after watching a few videos, I understand the concept. Machinima has become increasingly popular, not just among video game fans but among independent artists in general, for its low cost and time efficiency relative to live-action film or other forms of computer animation. The target group is reportedly males aged 18-34, which could be why I didn't know about it.

For those like me who are also clueless, "the word 'machinima' is a portmanteau of 'machine' and 'cinema' and refers to the process of creating real-time animation by manipulating a video game's engine and assets."[7] Essentially, it is filmmaking using the computer-generated images of a video game. The three-dimensional physics engines of modern video games provide computer animation in real time, without the need for time-intensive rendering. Screen-capture technology, available in most video games, allows a user to record the action as various players control characters in the game. Voice-overs are then recorded independently and layered onto the visual recording.[8]

A machinima video will generally be considered an infringing derivative work of the video game used in its production.[9] Most examples of machinima incorporate graphics (known as art assets) directly from the video game, which would qualify as infringement. While video game publishers may be reluctant to sue fans who distribute machinima videos for free, commercial machinima works are more likely to face legal challenges from copyright holders. Nevertheless, video game copyright owners would benefit from granting licenses to machinima producers, since machinima can serve as an effective marketing device for the video game title and build brand loyalty.

The bottom line: even though machinima productions may infringe upon copyrighted video games, these legal issues are not likely to impede the development of the genre as a whole. On the other hand, holders of video game copyrights have strong incentives to license their intellectual property in order to encourage this art form.[10]

Social Media and Law Enforcement

•April 14, 2018 • 14 Comments

The privacy individuals enjoy at home and in private has not been extended to our social media presence, and law enforcement organizations have begun using this information to investigate, corroborate, and prosecute. The internet is currently the wild west for law enforcement, which is using social media to slowly erode the Fourth Amendment rights guaranteed by the Constitution. The Fourth Amendment protects American citizens against unreasonable searches and seizures, yet current law enforcement practice has only slowly been analyzed by the courts. The rights of individuals are gradually encroached upon by law enforcement officials until courts step in to say otherwise. Social media offers law enforcement glimpses into the lives of the accused at the click of a button. This post will focus on the ongoing use of social media by law enforcement to investigate and surveil individuals, and on whether that use oversteps citizens' Fourth Amendment rights.

U.S. v. Blake

In cases involving computer warrants, there seems to be an evolving point of view as to what the police may access when executing a search warrant.[1] The U.S. Court of Appeals for the Eleventh Circuit seems to be leaning toward limiting the use of social media and email to prosecute individuals. In United States v. Blake, the defendants challenged the way the government obtained the corroborating information that they were running a prostitution ring, and the court expressed some concerns over the warrants used to search the defendants' email and Facebook accounts.

In Blake, the FBI arrested and charged the appellants, Dontavious Blake and Tara Jo Moore, with crimes related to sex trafficking. The FBI obtained warrants for Moore's Facebook and Microsoft accounts. The Facebook warrants were not limited to specific data or to a specific timeframe; the Microsoft warrant was limited to emails linked to the charges against the appellants.

The Eleventh Circuit held that the search of the Microsoft account was lawful because it was limited in scope: the emails to be turned over to law enforcement were limited to those that could contain potential evidence. The court did note that the Microsoft warrant's lack of a time limit on when the conspiracy was occurring was an overreach, but let it stand because the search was reasonably confined to material that could be connected to the alleged crimes.[2] The Facebook search, by contrast, was a clear overreach in the court's view, because law enforcement received all of the account's content regardless of whether it related to the alleged crime. With regard to private messages within the social media account, the court found the search should have been limited to messages sent to or from persons suspected at the time of being prostitutes or customers.[3] Nonetheless, the court held that even though the warrants were overly broad, they were supported by probable cause and upheld under the "good-faith" exception.

Law Enforcement use of Social Media

Law enforcement has begun using social media to monitor individuals, even those with no criminal activity or suspicion thereof on their record.[4] A 2014 survey of more than 1,200 federal, state, and local law enforcement professionals found that approximately 80 percent used social media platforms as intelligence gathering tools.[5]

Law enforcement's use of social media has raised several questions, particularly where it has been used to monitor peaceful protests,[6] to assemble social media activity as evidence for criminal conspiracy charges,[7] or to create fake profiles and impersonate individuals online.[8] These practices have raised longstanding concerns over a potentially disproportionate law enforcement focus on people of color, religious minorities, and low-income communities. They have also stoked fears about how such use may affect both the First Amendment, which protects free speech, and the Fourth Amendment, which protects against unreasonable searches and seizures.

In 2014, law enforcement used social media to crack down on the Heartless Felons gang after a rapper affiliated with the gang posted videos on social media in which he admitted to selling drugs.[9] Law enforcement used the videos and raps to corroborate other evidence against members of the gang. While this may be a positive outcome, the overextension of surveillance is a slippery slope for which courts have drawn no bright-line rule.

At the other end of the spectrum are situations like Ferguson, where law enforcement used social media to monitor peaceful protests.[10] Documents released by the Department of Homeland Security's Office of Operations Coordination indicated that the department frequently collects information, including data, on Black Lives Matter activities from social media accounts. Federal surveillance of peaceful protests is a dangerous road that has been traveled before, most notably in the FBI's programs run against civil rights movements. It would be easy to say that law enforcement would never again indulge in such activities, but that is not something that should be left to chance, and courts should act to curb law enforcement's use of social media for such purposes.

Face Morphing Technology

Evolving face morphing technology allows individuals to superimpose facial features onto preexisting video with relatively little effort.[11] The ability to place people in situations they were never in creates problems when law enforcement uses social media posts to investigate individuals. The most visible abuse of face morphing so far has been placing celebrities in pornographic scenes in which they were never involved.[12] However, it is not hard to see this technology being used to frame individuals, whether by law enforcement or by others. Facebook, meanwhile, uses facial recognition software that identifies the individuals in pictures and videos.[13]

The scenario in which an innocent individual is superimposed into a video is no longer farfetched. Celebrities already find themselves fighting fake celebrity porn, and it is not hard to imagine law enforcement basing an investigation on fake images.[14] That danger is amplified by the speed at which content spreads on social media and its everyday use.

Questions for Discussion

  • Do you think the court in United States v. Blake was correct in allowing the search and seizure of defendant’s accounts even though the search went beyond the scope of the warrant?
  • Do you think law enforcement should be able to use social media to monitor the accounts of individuals who have not been accused of any crime?
  • Do you think face morphing technology will make video evidence less trustworthy in the coming future?



[2] See United States v. Blake, No. 15-13395 (11th Cir. 2017).

[3] Id. at 21

[4] Alexandra Mateescu et al., Social Media Surveillance and Law Enforcement Data & Civil Rights (2015), (last visited Mar 29, 2018).

[5] Id.

[6] George Joseph, Exclusive: Feds Regularly Monitored Black Lives Matter Since Ferguson The Intercept (2015), (last visited Mar 29, 2018).

[7] Meredith Broussard, When Cops Check Facebook The Atlantic (2015), (last visited Mar 29, 2018).

[8] Jacob Gershman, Police Online Impersonations Raise Concerns The Wall Street Journal (2015), (last visited Mar 29, 2018).

[9] James F. McCarty, Police arrest dozens of West Side Cleveland gang members accused of waging reign of terror (2014), (last visited Mar 29, 2018).

[10] Joseph, supra note 6.

[11] Chang, James. “Deepfakes, Privacy Rights and the AI-Powered Blurring of the Lines.” Internet & Social Media Law Blog, 14 Feb. 2018,

[12] Id.

[13] Constine, Josh. “Facebook’s Facial Recognition Now Finds Photos You’re Untagged In.” TechCrunch, TechCrunch, 19 Dec. 2017,

[14] Palmer, Annie. “Reddit User Who Revealed Disturbing AI That Can Make Fake Porn Videos Using Celebrities’ Faces Has Now Launched an App so ANYONE Can Do It.” Daily Mail Online, Associated Newspapers, 25 Jan. 2018,

Online Gambling Serial Blog – Regulations Dealing with Online Promotions to Children (Blog Post 7 of 7)

•April 14, 2018 • 9 Comments

Online gambling in the United States has created many legal uncertainties, from how individual states have dealt with various online gambling issues to how advances in internet-based technologies have complicated enforcement, so clearer rules and regulations on these issues are needed. This blog discusses various online gambling issues in a seven-part serial. This seventh post steps away from online gambling and focuses on how regulators are dealing with advertisements and promotions directed toward children.

How Americans, especially children, consume media has changed dramatically in recent years. The regulatory framework for advertising to children, however, has not changed much since the 1990s. [1] Mobile devices, such as smartphones and tablets, are a major platform for reaching young people because children tend to be avid users of these devices. A Nielsen survey found that in households where these devices are available, 69% of children aged 8 to 10 use them. [1] Another study found that children prefer watching, and spend more time viewing, video on hand-held devices than on television. [1]

Many children watch YouTube even though YouTube's terms of service explicitly state it "is not intended for children under 13. If you are under 13 years of age, then please do not use the Service." [1] A survey done in 2014 found that 66% of children aged 6 to 12 visit YouTube daily, including 72% of 6 to 8-year-olds. [1] Much of the content initially available on YouTube consisted of "user-generated" videos produced by amateurs. Typical examples include videos of cats, cute babies, and people playing video games. Over time, many of these video creators built up a large following. They have come to be known as "YouTube celebrities" or "influencers." [3] Young people especially tend to follow YouTube celebrities more than traditional celebrities. [4] Brands collaborate with influencers because "they certainly know how to grab the power of social media and use their credibility to affect their followers' views (and even their purchasing decisions)." [5] Influencer marketing works because "[p]eople value influencers for their authenticity, as their endorsement matters to them and this helps a brand increase its human element of the wider marketing strategy." [5]

The Federal Communications Commission (“FCC”) and the FTC are the two government agencies primarily responsible for regulating advertising to children. However, each agency enforces different laws, has different means of developing policies and rules, and uses different enforcement methods. [1] The regulatory efforts of the FCC and the FTC are supplemented, to a limited extent, by industry self-regulation. [1]

FCC rules apply only to television delivered by means of broadcast, cable, and satellite television. They do not apply to motion pictures (even if shown on television), video games, or online videos. [1] The FCC’s children’s advertising policies have not changed much since 1974. [1] Congress passed the Children’s Online Privacy Protection Act of 1998 (“COPPA”). [1] A major purpose of COPPA was to limit advertising targeted to children by prohibiting the collection, use, and dissemination of personal information from children without informed, advance parental consent. [1] Section 5 of the Federal Trade Commission Act gives the FTC the power to prevent deceptive and unfair marketing practices in interstate commerce, regardless of the medium employed.  [2]

Using influencer videos that do not appear to be advertising takes unfair advantage of children’s cognitive inability to appreciate the nature and purpose of advertising. [1] Because children naturally love toys and characters, market forces cannot be relied on to protect children from excessive, deceptive, or unfair advertising. [1]

Currently, Google facilitates influencer marketing on YouTube in several ways. The YouTube Partners Program allows creators to monetize content on YouTube by letting Google stream advertisements in exchange for a portion of the advertising revenue. [6] Recently, a coalition of 23 consumer, child safety, and privacy advocacy groups accused YouTube of violating U.S. child protection laws. [7] The coalition filed a complaint with the FTC alleging that YouTube collects data from children under 13. [7] The group alleges that YouTube collects location data and the browsing habits of its users, even when they are children, and uses them to target advertising. [7] Google said its advertiser tools did not include the option to target advertisements at under-13s. It also said it offered the YouTube Kids app "specifically designed for children." [7]

New legislation will be required to protect children from excessive and deceptive marketing practices in the digital environment. That legislation should provide ample legal authority, resources, and the political support for the FCC, FTC, or perhaps some other agency, to develop new rules and enforce them across all platforms.

Academics who study marketing to children are finding that existing regulations are ineffective in a digital environment and have called on policy makers to take action. For example, one study concluded that the “nature of contemporary advertising demands a radical revision of our conceptualization of ‘fair’ marketing to children,” and urged policy makers “to reconsider policies and regulations concerning child-directed advertising.” [1]

Do you think YouTube should be held responsible for advertisements placed in front of viewers under the age of 13 when a lot of the content on YouTube seems to be directed toward younger audiences? What potential legislation do you think should be in place to limit the effects of “influencers” on susceptible children? These are some interesting questions that have to be answered as the digital age has greatly changed the way younger Americans are influenced by advertisements on a daily basis.


[2] Federal Trade Commission Act, 15 U.S.C. §45(a)(1)


[4] pewdiepie-1201544882/