Human Trafficking: The (un)intentional effects of SESTA

•March 31, 2019 • 6 Comments

Human trafficking has been defined by the United Nations Office on Drugs and Crime (UNODC) as “the recruitment, transportation, transfer, harboring or receipt of persons, by means of the threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power or of a position of vulnerability or of the giving or receiving of payments or benefits to achieve the consent of a person having control over another person, for the purpose of exploitation.”[1] Because society today is so heavily shaped by technology, traffickers can conduct almost all of their business online. As a result, the developing field of cyberlaw has become interwoven with the issue of human trafficking.

CDA 230

Section 230 of the Communications Decency Act of 1996 (CDA 230) is one of the most important reasons we have the internet we do today. CDA 230 limits the liability of interactive computer service providers for content created by third-party users.[2] According to the CDA, section 230 was created to increase the public’s benefit from Internet services by restricting government interference in interactive media and protecting the free flow of expression online.[2] Without those protections, most online intermediaries would not exist in their current form; the risk of liability would simply be too high.[3] However, CDA 230 has become a point of tension in debates over technology-facilitated trafficking, because some see it as a shield that protects providers from liability even when they host illegal content created by third parties.

SESTA/FOSTA

The Stop Enabling Sex Traffickers Act (SESTA), along with its companion bill, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), was prompted by a case involving Backpage.com, in which executives of the site were arrested on charges of pimping a minor, pimping, and conspiracy to commit pimping.[4] However, the courts dismissed the case based on Section 230 of the Communications Decency Act.[4] SESTA was then passed, making it illegal for Backpage and similar websites to “knowingly assist, facilitate, or support sex trafficking.”[5]

SESTA in part provided that section 230 does not limit: “(1) a federal civil claim for conduct that constitutes sex trafficking, (2) a federal criminal charge for conduct that constitutes sex trafficking, or (3) a state criminal charge for conduct that promotes or facilitates prostitution in violation of [FOSTA].”[5]

SESTA effectively requires Internet businesses to deploy automated filtering technologies to monitor the activity taking place on their websites. Automated filtering helps surface content that may need further review, but that review must still be completed by the website’s operators. Many Internet companies, however, cannot dedicate enough staff time to fully mitigate the risk of litigation under SESTA. Instead, they tune their automated filters to err on the side of extreme caution, removing any content that may be related to sex trafficking.[3] It would be a technical challenge to create a filter that removes sex trafficking advertisements but doesn’t also censor a victim of trafficking telling her story or trying to find help.[6]
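The over-filtering problem can be sketched with a toy example. This is purely illustrative (the blocked-term list, function name, and sample posts are all hypothetical, not any platform’s actual system), but it shows why a keyword filter tuned for extreme caution sweeps up a victim’s plea for help along with an actual advertisement:

```python
# Hypothetical keyword filter of the kind SESTA pushes sites toward.
# The blocked-term list and sample posts are invented for illustration.
BLOCKED_TERMS = {"escort", "trafficking", "sex work"}

def flag_post(text: str) -> bool:
    """Return True if the post mentions any blocked term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

ad = "Escort available tonight, cash only"
survivor_story = "I escaped sex trafficking and want to share resources"

print(flag_post(ad))              # True - the advertisement is caught
print(flag_post(survivor_story))  # True - but so is the victim's story
```

The filter has no way to distinguish intent from mere mention, which is exactly the censorship dynamic critics of SESTA describe.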

Effects of SESTA

On their face, FOSTA and SESTA are anti-trafficking bills; critics, however, argue that both weaken internet freedoms and push platforms toward censorship.[6] The algorithms being used fail to differentiate between Internet users talking about themselves and users making statements about marginalized groups.[3] Over-censoring content almost always results in some voices being silenced, and the most marginalized voices in society can be the first to disappear.[4]

As a result of SESTA/FOSTA, Tumblr banned adult content on its website on December 17, 2018. Sex worker advocates argue that the bill does nothing to help sex-trafficking victims but does make sex work a federal crime.[6] These banning policies marginalize sex workers by making it harder for them to safely conduct their business, report abuse, and share safety resources that can help trafficking victims.[7] Many sex workers also used online communities to warn one another about potentially violent clients to avoid. Limiting these online communities drives the trafficking problem underground, which results in an increase in violence: studies show that violence against women decreases when online advertising is available to sex workers.[8] There has been little evidence to suggest that these bans on adult content have been effective in any way, but they do make bad actors more difficult to find.[9] Additionally, Backpage aided law enforcement in capturing traffickers and provided tips on criminal activity.[10]

Many believe that sex traffickers will soon learn which words or phrases trigger the filters and avoid them by using others.[3] Platforms that rely on these algorithms should carefully balance enforcing standards with respecting users’ right to express themselves without criminalization.

  1. Should websites be held accountable/liable for third party content?
  2. Would a balancing test be effective when discussing the use of algorithms? Which factors should be considered?
  3. Could SESTA have been created to criminalize sex work under the guise of anti-sex trafficking efforts?
  4. What amendments would you suggest to SESTA to ensure that it is achieving its purpose?

[1]https://www.unodc.org/documents/middleeastandnorthafrica/organised-crime/UNITED_NATIONS_CONVENTION_AGAINST_TRANSNATIONAL_ORGANIZED_CRIME_AND_THE_PROTOCOLS_THERETO.pdf

[2]47 U.S.C. § 230(a).

[3]https://www.eff.org/deeplinks/2017/09/stop-sesta-whose-voices-will-sesta-silence

[4]https://gizmodo.com/ceo-of-americas-second-largest-classifieds-site-arreste-1787509049

[5]https://www.congress.gov/bill/115th-congress/senate-bill/1693

[5]https://theslot.jezebel.com/house-passes-online-sex-trafficking-bill-that-critics-s-1823374389

[6]https://www.huffpost.com/entry/opinion-tumblr-ban-sex-work-porn_n_5c09af57e4b04046345a86b1

[7]http://gregoryjdeangelo.com/workingpapers/Craigslist5.0.pdf

[8]https://motherboard.vice.com/en_us/article/bjpqvz/fosta-sesta-sex-work-and-trafficking


Are loot boxes the new slot machine?

•March 24, 2019 • 6 Comments

A loot box is essentially a virtual container that holds virtual items for the game it was purchased through. You pay a few dollars, and in return you are given a box with a random assortment of virtual items. Consumers are spending roughly $30 billion a year on loot boxes, and these profits have made loot boxes an essential aspect of almost every new game.[1] This growing industry has caused concern among politicians and regulatory agencies because of loot boxes’ similarities to gambling, the psychology behind them, and the predatory practices of companies hoping to maximize profit.

Psychological Component

Loot boxes are certainly not a new concept. Baseball card collectors have long sought the rare chase card from a pack and felt the enticement of the chance to get it. Psychologists describe this enticement as “variable rate reinforcement,” which explains that “[t]he player is basically working for [a] reward by making a series of responses, but the rewards are delivered unpredictably.”[2] The excitement produced by loot boxes can be tracked by observing the brain. As Dr. Luke Clark, director of the Centre for Gambling Research at the University of British Columbia, explains, “We know that the dopamine system, which is targeted by drugs of abuse, is also very interested in unpredictable rewards. Dopamine cells are most active when there is maximum uncertainty, and the dopamine system responds more to an uncertain reward than the same reward delivered on a predictable basis.” This leads purchasers to chase rewards through the randomized process no matter whether it costs $10, $20, or even $500 to obtain the item they’re looking for.
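As a rough illustration of how variable rate reinforcement translates into spending, consider a toy simulation. The 1% drop rate and $2 box price are made-up numbers, not any real game’s odds; the point is only that an uncertain reward with a fixed average can still produce very expensive unlucky streaks:

```python
import random

# Toy model: keep buying $2 loot boxes until a hypothetical 1%-drop-rate
# rare item appears, and record the total spent.
def cost_to_get_rare(drop_rate=0.01, box_price=2.0):
    boxes = 0
    while True:
        boxes += 1
        if random.random() < drop_rate:
            return boxes * box_price

random.seed(0)
costs = [cost_to_get_rare() for _ in range(10_000)]
print(f"average spend: ${sum(costs) / len(costs):.2f}")  # roughly $200
print(f"worst case:    ${max(costs):.2f}")               # well over $500
```

The average is just price divided by drop rate, but the unpredictable delivery is what keeps players opening “one more box.”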

Gaming companies have even incorporated the gambling aesthetics used by casinos. In many games, when you open a loot box a flurry of lights shoots across the screen and the rarity of the item is depicted for a brief second before you learn which item you are about to receive, all in hopes of building anticipation. Jeremy Craig, senior game designer for Overwatch, explains that this entire process is “all about building the anticipation.”[2] Other games, such as Counter-Strike: Global Offensive, even use a crate scroll that perfectly mimics a slot machine.[2] The concern about games adopting this style is the lasting influence it can have on a younger audience. A recent study performed in the United Kingdom has attributed an increase in underage gambling to exposure to loot boxes at early ages.[3]

Example of opening a loot box.

Current Legal Issues in the United States

Most relevant gambling laws are at the state level. “At a high-level, an oversimplified definition of gambling involves: staking something of value (consideration) for a chance to win something of value (a prize). If all three elements are present in an activity (prize, chance, and consideration), it may be gambling.”[4] A very simple analysis suggests that loot boxes could fall within this definition, but it becomes complicated when considering the secondary market and the element of value. For example, many in the video game industry insist that loot boxes should not be considered gambling because of the inability to cash out and sell the digital assets.[5] While this is true in some games, some of the most popular (CSGO, Playerunknown’s Battlegrounds) allow players to sell their items on a secondary market, where items can run for hundreds or even thousands of dollars.

The most obvious abuse of this happened a few years ago, when two popular YouTube streamers created a gambling site where players could essentially use skins as chips; a raffle would occur, and one player would win all the skins.[6] The YouTubers advertised their gambling site to many underage participants without disclosing their ownership of the company. The Federal Trade Commission brought a complaint against them; however, it dealt exclusively with their deceptive practices toward their audience and did not address whether the site constituted gambling.[7] As of now, there does not seem to be a solid consensus on whether a court would consider loot boxes gambling, but legislators have sought other avenues.

In Hawaii, State Representative Chris Lee has criticized the gaming industry for predatory practices and has proposed various legislation in hopes of curbing the impact on young children. He introduced House Bill 2686, which would prohibit retailers from selling games with a randomized loot box system to anyone under 21 years of age.[8] Additionally, he proposed another bill (House Bill 2727) that would require “a prominent, easily legible, bright red label” indicating that loot boxes in the game contain “gambling-like mechanisms which may be harmful or addictive.”[8] United States Senator Maggie Hassan has asked the FTC to investigate loot boxes.[9] Senator Hassan explained her concern about the “close link” loot boxes have to gambling and the possible negative impact they can have on children. Hopefully the FTC investigation will determine whether children are being adequately protected and whether we need to adopt some sort of legislation, as many other countries have, to protect them.

Legal Approaches in Other Countries

The Netherlands and Belgium have come down on the gaming industry the hardest. The Netherlands Gaming Authority (NGA) has held that “offering gamers ‘a chance’ with real money is prohibited without a license. They also believe that loot box is similar to a slot machine and roulette games which are considered gambling.”[10] Following the NGA’s investigation, the Netherlands banned loot boxes and required that all games remove them; Belgium made the same decision a week later. China, by contrast, requires that game developers disclose the probability of obtaining a rare item in their loot box systems.[4] All in all, it is not clear what the best approach to loot boxes is. However, with the industry seeing nothing but growth, what is certain is the need to further study loot boxes and their lasting impact on young children.

Questions

  1. Do you believe that loot boxes generally fall within the definition of gambling, or are they more comparable to buying a pack of baseball cards?
  2. Is the secondary market necessary for loot boxes to be considered gambling? If there is no value for the player outside of their own enjoyment, could loot boxes still be considered gambling?
  3. Does limiting players’ ability to trade their items as they wish infringe upon the property rights of those who want to participate in gambling markets like CSGOlottery? Or does a EULA resolve these concerns?
  4. Does the stricter approach of the Netherlands and Belgium seem the better step, or is China’s approach of requiring developers to disclose the odds for each loot box preferable?
  5. Is this simply an issue that parents need to concern themselves with, should the decision be left to regulatory agencies like the Entertainment Software Rating Board, or do we need legislation like that suggested by State Representative Chris Lee?

[1] https://www.theverge.com/2019/2/19/18226852/loot-boxes-gaming-regulation-gambling-free-to-play

[2] https://www.pcgamer.com/behind-the-addictive-psychology-and-seductive-art-of-loot-boxes/

[3] https://www.newsweek.com/child-gamblers-loot-boxes-gambling-gaming-ban-illegal-underage-1226841

[4] https://www.lawofthelevel.com/2017/10/articles/gaming/loot-boxes-illegal-gambling-mechanic/

[5] https://newtech.law/en/are-loot-boxes-a-type-of-gambling/

[6] https://arstechnica.com/gaming/2017/09/youtubers-escape-fines-for-promoting-their-own-csgo-gambling-site/

[7] https://www.ftc.gov/system/files/documents/cases/1623184_c-_csgolotto_complaint.pdf

[8] https://arstechnica.com/gaming/2018/02/no-video-game-loot-boxes-for-buyers-under-21-says-proposed-hawaii-bills/

[9]https://www.polygon.com/2018/11/27/18115365/ftc-loot-crate-investigation-senator-hassan

[10] https://www.gameprime.org/2018/05/loot-boxes-illegal/

Ghost Guns

•March 15, 2019 • 6 Comments

3D-printed guns, or “Ghost Guns,” are much more than a sci-fi fantasy; they have become part of our reality. In 2012, a Texas-based organization named Defense Distributed posted Computer Aided Design files (CAD Files) to its website that allowed anyone with a 3D printer and an internet connection to manufacture their own untraceable gun in their garage. The State Department, which has the authority to regulate firearms exports under 22 U.S.C. § 2778, found that Defense Distributed had engaged in unregulated export of firearms data in violation of the statute and ordered the CAD Files taken down. Defense Distributed filed suit in 2015, claiming that the State Department’s ruling was an unconstitutional prior restraint on speech in violation of the First Amendment. After losing a few rounds, the State Department settled under unusual circumstances and decided to allow Defense Distributed to post the CAD Files online without restriction. In response, 8 states (MA, CT, NJ, PA, OR, MD, NY, WA) as well as the District of Columbia sued the State Department, arguing that the settlement violates the Administrative Procedure Act as well as the Tenth Amendment.[1] The states were also able to secure an injunction prohibiting Defense Distributed from posting the plans until the suit is concluded.

Defense Distributed initially argued that the State Department’s enforcement of its regulation was an unconstitutional prior restraint. A prior restraint is a rule or regulation that prohibits speech before the speech actually occurs, and it is only unconstitutional if it covers speech that is otherwise protected by the First Amendment. While internet postings are typically protected speech under the First Amendment, the Court has outlined certain kinds of speech that are not protected. Incitement, for example, is speech that goes beyond normal speech and calls for imminent lawless action or violence. While Defense Distributed doesn’t explicitly call for its users to download the plans and commit acts of violence, can it advance any other justification for posting plans to create untraceable firearms and firearm parts for anyone to download?

Defense Distributed did try to argue that its purpose was to advance individuals’ access to firearms in exercise of their Second Amendment rights. However, Second Amendment rights are not absolute. For example, a person must usually be 18 to purchase a long rifle and 21 to purchase a handgun. Additionally, virtually all states require a person to obtain a license before regularly carrying a gun. Even then, there exist restrictions on carrying firearms at schools, hospitals, government buildings, and places that serve alcohol as a primary source of their revenue. All firearms manufactured in the United States are required to have a trackable serial number printed on them, and, unless special circumstances exist, all sales are required to be recorded and reported.

In response to the Ghost Gun phenomenon, a few states have introduced amendments to existing gun laws to cover Ghost Guns. New York, for example, introduced an amendment criminalizing the manufacture and sharing of information necessary to create a Ghost Gun.[2] The amendment explicitly defines Ghost Guns and criminalizes their possession and manufacture (if unregistered, without a serial number, and not made by a licensed gunsmith). Washington has introduced an amendment with a similar effect. Although the bill does not specifically contain the phrase “Ghost Gun,” it dedicates an entire section to “Untraceable Firearms” and goes to great lengths to cover firearms without serial numbers registered with a federally licensed manufacturer.[3] The bill further zeroes in on Ghost Guns by creating an exception for antique firearms. New Jersey is another state that has put forth an amendment addressing Ghost Guns; its amendment specifically criminalizes the purchase of firearm parts to “illegally manufacture an untraceable firearm, known as Ghost Guns.”[4]

Comparing these statutes, the states’ main concern seems to be that these guns cannot be traced. When people print guns in their garages, they are not printing serial numbers on their new firearms, nor can they print a valid serial number without knowing which numbers are already attributed to other firearms. This is further evidenced by the court rulings in the 2015 Defense Distributed case. The court there weighed the interests that the states put forward (security, protection of citizens, ability to trace firearms), and the permanent harms the states would suffer were those interests abridged, against the harms Defense Distributed would suffer to its First Amendment interests if it were barred from continuing to post the plans.

Legislation like this is not likely to be struck down, especially in light of the previously mentioned existing restrictions on a person’s right to bear arms. These statutes are narrowly tailored to address a specific problem, and they attempt to bring a new form of firearm into compliance with existing registration laws. The bigger obstacle I see regulators facing is getting these statutes passed in their states. Many states (although not those mentioned here) have a strong majority of people who believe in an absolute exercise of the Second Amendment, and we likely won’t see restrictions in those states any time soon. How effective can restrictions be if they don’t apply to everyone?

Is a harms balancing test the right way to determine which rights matter more in a case like this? If so, is there ever a way to determine which rights are weighed heavier than others? When looking at the harm, should that test be whether the harm is temporary or permanent? What if the harm suffered by both sides is temporary, or both sides suffer a permanent harm?

Is the untraceable nature of Ghost Guns the biggest reason for concern? What about their creation and use by terrorist organizations, or by those wishing to carry out mass shootings? We have seen that our current registration system does little to prevent these occurrences anyway.

Where should gun groups like the NRA come down on this issue? On one hand, they have built their entire platform on the absolute protection of the Second Amendment. On the other hand, big gun manufacturers (Beretta, Smith & Wesson, Glock…) are the main source of revenue fueling the NRA. Ghost Guns have the potential to cut into their profits greatly, so their existence seems to present future financial concerns for these manufacturers. Will preservation of profits cause them to take a major pro-regulation stance on Ghost Guns?


[1] https://www.atg.wa.gov/news/news-releases/ag-ferguson-sues-over-trump-administration-giving-dangerous-individuals-access-3d.

[2] https://legislation.nysenate.gov/pdf/bills/2019/S2143.

[3] http://lawfilesext.leg.wa.gov/biennium/2019-20/Pdf/Bills/Senate%20Bills/5061-S.pdf.

[4] https://www.njleg.state.nj.us/2018/Bills/S2500/2465_I1.HTM.

LAW & TECH STORIES OF INTEREST

•March 13, 2019 • Leave a Comment

While we were on break there were several news stories touching on issues we’ve discussed in class. Some of them may be relevant to your papers, but even if not, they are all interesting.  I’ve cobbled them together from across a number of sources.

HACKING/UNAUTHORIZED ACCESS:

Three men cop to $21 million vishing and smishing scheme

JavaScript infinite alert prank lands 13-year-old Japanese girl in hot water

ALGORITHMS

Over 8,000 marijuana convictions in San Francisco cleared thanks to computer algorithm

CRYPTOCURRENCY

Ethereum, a 51% Attack and How to Change an Unchangeable Blockchain

SPYWARE

NSA has shut down phone call record surveillance

Enjoy.


The Future of Cyber-Aggression in an International Framework

•February 24, 2019 • 6 Comments

What is a Cyber-Attack?

In order to fully understand how states can defend against and deter cyber-attacks, we must understand exactly what a cyber-attack is. This is more than a technicality: the boundary must be defined clearly enough that an executive knows how to defend the nation against attacks.[3]

The definition of a cyber-attack is not a matter of consensus. The commonly accepted usage of the word includes criminal, espionage, and terrorist activities in addition to military ones.[3] The RAND Project AIR FORCE study defines cyber war as “a campaign of cyber attacks launched by one entity against a State and its society, primarily but not exclusively for the purpose of affecting the target state’s behavior.”[3] While this definition limits the target of cyber aggression to a state actor, it helps illustrate the many disparities in defining cyber-attacks.

Deterrence

There are several issues that preclude the effective deterrence of state-sponsored cyber-attacks. The first is that while there has been progress in creating a set of cyberspace norms, there are hardly any consequences for states that violate them.[2] The framework developed to this point includes the application of international law to cyberspace, the acceptance of certain voluntary norms of state behavior, and the adoption of confidence- and transparency-building measures.[4] As far as the establishment of norms goes, none has made a bigger impact than inaction: the global community has not done an effective job of punishing and deterring bad actors in cyberspace.[4]

It is true that deterrence is a complex issue. An effective framework involves a combination of strengthening defenses, establishing expectations for international actors, and publicly declaring a strong policy.[4] Progress has been made in these areas in the global framework, but the biggest problem facing realistic enforcement lies with attribution.

Attribution

Attributing state-sponsored cyber-attacks to specific entities is difficult, and the methodologies for doing so reliably are still evolving. The most exhaustive means of protecting a company are typically too expensive for most companies, and information about attacks on governments is usually classified.[1] Unlike in the physical world, there are no indicators giving warning of the timing or location of a cyber-attack.[4] Moreover, there is no political standard for attributing an attack to a state; the evidence would need to establish almost 100% certainty.[4]

There are some ideas about how this can be remedied. Chris Painter, a former cyber diplomat at the US State Department, believes that states need to speed up attribution of cyber-attacks and quickly mount a credible response, all as part of a collective multilateral action plan.[2] Delays in attribution are due in part to the technical difficulties of gathering evidence and to balancing the benefits of going public against the risk of compromising the sources and methods of intelligence gathering.[2] Painter argues that all of these cycles need to be shortened, and he calls for states to “name-shame” attackers after attacks, which can be an effective tool when used collectively.[2] However, this tool has clear shortcomings when it comes to states like Russia and North Korea, which would not be affected by “name-shaming” because they believe their power is enhanced by having their actions attributed to them.[2]

A more effective way to deter state cyber-attacks would be for states to impose diplomatic, economic, and law enforcement sanctions following attacks.[2] However, this needs to be done more regularly and in a more timely fashion for states to take these threats seriously.[2] For example, the US government has had the power to impose cyber-sanctions since 2015 but has used them only twice.[2] Such infrequent sanctions are not enough to deter state actors from maliciously attacking other states in order to influence their actions.

Responses

In order to develop a more effective international framework for deterrence, a few responses need to be considered. First, measures taken against bad actors need to be more than symbolic; they must have the potential to change the actor’s behavior.[4] The relationships between the states obviously need to be considered, but potential escalation must be in the back of every decision-maker’s mind. This is particularly difficult because escalation paths aren’t clearly defined for events that originate in cyberspace.[4]

Collective action against a bad actor is almost always a better way to address the situation than responding as one state. The key problem here is information sharing.[4] Every state will want to satisfy itself before taking the political step of attribution, and sharing sensitive information among states with differing capabilities to protect that information is a tough issue.[4]

State-sponsored cyber-attacks are on the rise, and not just against government and military organizations. The Digital and Cyberspace Policy program at the Council on Foreign Relations has aggregated nearly 300 state-attributed incidents, half of which involved public sector targets, and these are just the ones that are publicly known.[1] It is clear that the international community must give much more attention to cyber-attacks. In the short term, states need to make more visible and timely responses to cyber-attacks to reinforce the notion that bad actors will pay dearly for this sort of behavior.

  1. https://blogs.thomsonreuters.com/answerson/state-sponsored-cyberattacks/
  2. https://www.zdnet.com/article/state-sponsored-cyber-attacks-deserve-tougher-responses-aspi-report/
  3. https://apps.dtic.mil/dtic/tr/fulltext/u2/a553344.pdf
  4. https://www.aspi.org.au/report/deterrence-cyberspace

Cryptocurrency and the Rise of Ransomware Attacks

•February 18, 2019 • 6 Comments

Cryptocurrency is virtual currency that is not issued by a central authority or subject to government manipulation. [1] In this way, cryptocurrency can be compared to gold bars, which people often buy as an investment in the hope that they will increase in value. [1] Fiat currency, such as dollar bills, is by contrast issued by a central authority, is subject to government manipulation, and is not expected to increase in value. [1]

Ransomware attacks occur when cybercriminals encrypt victims’ files using data-encrypting malware and demand payment, usually in the form of cryptocurrency, as a means for victims to get their files back. [2] Ransomware attacks have been going on for quite some time. [2] In the past, before bitcoin and other cryptocurrencies were around, cybercriminals would use online payment methods such as PayPal or Western Union, which were linked to a bank account, leaving the cybercriminals vulnerable to discovery. [2] Cybercriminals even went so far as to use postal services to receive payment from their victims. [2] Since the dawn of cryptocurrency, however, ransomware attacks have become more frequent, possibly because of cybercriminals’ ability to remain anonymous and avoid law enforcement. [2]

The U.S. recently indicted two Iranian nationals, Faramarz Shahi Savandi and Mohammad Mehdi Shah Mansouri, for alleged ransomware attacks that had been going on for years and affected more than 200 victims. [5] The attackers demanded bitcoins which resulted in more than $6 million in ransom payments. [5] Once ransom money was paid, two other Iranian nationals allegedly converted the bitcoins into Iranian riyals. [5] This is not the first time that the U.S. has issued charges over a ransomware attack. [5] The U.S. also issued charges against a North Korean man for a ransomware attack that affected FedEx, Britain’s National Health Service, and others. [5]

Bitcoin is often the choice for cybercriminals demanding payment from victims because it has a certain level of anonymity and can be easily purchased by victims for payment. [2] Cybercriminals try their best to remain anonymous when demanding bitcoin by using mixing services, which are effectively money laundering for cryptocurrencies. [2] Instead of making it easy for law enforcement to find the specific wallet that the victims’ payments are going to, and potentially discover who is behind the attack, cybercriminals take all the payments, mix them with those of tens of thousands of other wallets, and eventually get their ransom payments back after they have been mixed with other money. [2]

Unfortunately for cybercriminals, law enforcement has been taking advantage of the fact that bitcoin is not completely anonymous. [3] Law enforcement can use the blockchain, which is where the transactions and addresses of bitcoin users are recorded, to track down these cybercriminals. [3] Mixing services do exist for bitcoin and other cryptocurrencies, but usually the only ones using them are those engaged in illegal activity, meaning that as soon as you use a mixing service you have already raised a red flag. [3] Now, unfortunately for law enforcement, cybercriminals are turning to a new type of cryptocurrency in their attempts to remain anonymous. [3]
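The traceability point can be illustrated with a toy transaction graph. The addresses and ledger below are invented for illustration; real chain analysis works over actual blockchain data and is far more sophisticated, but the core idea of following public payment flows is the same:

```python
# Hypothetical public ledger: each address maps to the addresses it paid.
# On a real blockchain this graph is reconstructed from transaction records.
ledger = {
    "victim_wallet": ["ransom_addr"],
    "ransom_addr":   ["hop_1", "hop_2"],
    "hop_1":         ["exchange_deposit"],
    "hop_2":         [],
}

def trace(start: str) -> set:
    """Follow all outgoing payments from `start` across the ledger."""
    seen, stack = set(), [start]
    while stack:
        addr = stack.pop()
        if addr in seen:
            continue
        seen.add(addr)
        stack.extend(ledger.get(addr, []))
    return seen

print(trace("victim_wallet"))
# The ransom payment is followable all the way to an exchange deposit,
# where real-world identity checks can apply.
```

This pseudonymity, rather than true anonymity, is exactly what monero’s design (discussed next) tries to eliminate.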

Monero is a newer cryptocurrency, launched in 2014, that provides new ways for cybercriminals to remain anonymous. [3] Monero uses ring signatures to obscure the identity of senders and recipients. [3] Ring signatures combine a user’s account keys with public keys from monero’s blockchain to create a list of possible signers, meaning that you cannot link one particular signature to a specific user. [3] Monero also uses stealth addresses, which are randomly generated, one-time addresses created for each transaction on behalf of the recipient. [3]

As mentioned earlier, mixing services are available for certain cryptocurrencies, but using one often raises a red flag. [3] With monero, however, all of the coins used in transactions are always mixed, so no red flags are raised. [3] Monero users also have the ability to selectively share their account transactions through a view key. [3] One shortcoming monero initially faced was that it obscured the senders and recipients of transactions but not the amount; monero then introduced RingCT, which conceals not only the identities of the sender and recipient but also the amount of the transaction. [3]

With its level of privacy, monero offers fungibility: since monero transactions are untraceable, no two coins are distinguishable from one another. [3] With bitcoin, however, the transaction history is recorded on the blockchain, which means bitcoins associated with theft may be shunned by merchants and exchanges. [3]

Ransomware attacks cause plenty of trouble and inconvenience, but cybercriminals also partake in cryptojacking. [4] Cryptocurrencies are generated through a process known as mining. [1] Most cryptocurrencies have a finite number of units that can be mined, so the integrity of the currency is not diluted. [1] Mining cryptocurrency, however, requires a great deal of processing power, so cryptojackers look to large enterprises that have it, one of which happened to be Tesla. [4] Cryptojackers found an administrative portal for cloud application management that was not password protected and installed mining malware. [4] With Tesla already using so much electricity, the cryptojackers could have gone unnoticed for quite some time had the security firm RedLock not noticed Tesla's open server being used for cryptomining. [4]
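The "mining" that consumes all this processing power is, at its core, a brute-force search for a lucky hash. A toy proof-of-work sketch in the style of Bitcoin's hash puzzle (not every currency mines exactly this way, and real difficulty targets are astronomically harder):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash with the block data starts with
    `difficulty` zero hex digits -- the repetitive work that burns the
    processing power (and electricity) cryptojackers steal."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("example block", 4)
digest = hashlib.sha256(f"example block{nonce}".encode()).hexdigest()
assert digest.startswith("0000")  # proof the work was done
```

Each extra zero in the target multiplies the expected number of hash attempts by sixteen, which is why cryptojackers target fleets of powerful machines rather than their own hardware.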

It seems that cryptocurrencies have made cybercriminals' jobs easier, since they offer ways for users to remain anonymous. Further, cryptocurrencies are only becoming more private, which does not help law enforcement in its search to uncover the perpetrators of ransomware attacks. Cryptocurrencies may be making ransomware attackers' jobs easier, but criminals will always find a way around safeguards to get what they want. With new cryptocurrencies constantly being introduced, it would not be surprising to see an increase in cryptojacking as well.

Questions to consider:

  • Would restrictions placed on mixing services help in the search for ransomware perpetrators? Would such restrictions even be permissible?
  • Monero is gaining acceptance on the dark web. Is Monero the new Bitcoin for cybercriminals?
  • Anyone can partake in mining. Should cryptocurrencies make mining harder (by requiring more processing power), or would that only add to their value and appeal?

[1] https://www.forbes.com/sites/forbestechcouncil/2017/08/03/how-cryptocurrencies-are-fueling-ransomware-attacks-and-other-cybercrimes/#6bd4ec0e3c15

[2] https://www.zdnet.com/article/how-bitcoin-helped-fuel-an-explosion-in-ransomware-attacks/

[3] https://www.coindesk.com/what-to-know-before-trading-monero

[4] https://www.wired.com/story/cryptojacking-tesla-amazon-cloud/

[5] https://ethereumworldnews.com/bitcoin-ransomware-the-u-s-indicts-iranians-over-6-million-cryptocurrency-cyber-crimes/

Algorithms in Criminal Justice – Biased Technology?

•February 9, 2019 • 6 Comments

As interest in artificial intelligence has blossomed, so too has the idea of using it in the criminal justice context. Artificial intelligence has been used in various aspects of the criminal justice system, such as algorithms to determine sentencing and risk assessments when setting bail. Although the use of artificial intelligence and algorithms is not new, government agencies are coming up with innovative ways to use them.

Cities across the country have been plagued with gun violence, which has left police departments, citizens, and the government searching for solutions. Notification of gunfire in neighborhoods where residents prefer not to call law enforcement may help police respond more quickly in areas where they otherwise might not have been notified at all. ShotSpotter is an artificial intelligence technology that detects gunfire using sensors placed thirty feet in the air and under a mile apart.[1] ShotSpotter filters the sensor data through an algorithm that isolates the sound of gunfire and can locate a shot to within ten feet of where it occurred.[1]
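The geometry behind this kind of localization can be sketched simply. This is not ShotSpotter's proprietary algorithm, just the basic time-of-arrival math that any acoustic triangulation relies on, with a hypothetical sensor layout and a noiseless simulated shot:

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second

# Hypothetical sensor positions (meters) roughly a kilometer apart.
sensors = [(0, 0), (800, 0), (0, 800), (800, 800)]
true_shot = (310, 540)

# Each sensor records when the bang arrives (simulated, noise-free).
arrival_times = [math.dist(s, true_shot) / SPEED_OF_SOUND for s in sensors]

def locate(sensors, times, step=10):
    """Brute-force grid search for the point whose predicted arrival
    times best match the recorded ones (least-squares residual)."""
    best, best_err = None, float("inf")
    for x in range(0, 801, step):
        for y in range(0, 801, step):
            preds = [math.dist(s, (x, y)) / SPEED_OF_SOUND for s in sensors]
            # Compare time *differences* so the unknown moment the
            # shot was fired cancels out.
            diffs = [(p - preds[0]) - (t - times[0])
                     for p, t in zip(preds, times)]
            err = sum(d * d for d in diffs)
            if err < best_err:
                best, best_err = (x, y), err
    return best

print(locate(sensors, arrival_times))  # → (310, 540)
```

Real deployments must additionally classify the sound (gunshot versus firecracker or backfire) and cope with echoes and noise, which is where the hard machine-learning problems, and the errors discussed below, come in.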

ShotSpotter is not the first technology able to detect gunfire. The Naval Research Laboratory and Maryland Advanced Development Laboratory have worked for years on detecting small arms gunfire with land-based systems and larger artillery with airborne-based systems.[2] The main detection mechanism used to sense muzzle flash is called Mid-Wave InfraRed (MWIR).[2] MWIR has been used to detect small arms fire from enemy snipers since as early as the 1960s.[2]

By 1996, a program called VIPER was using MWIR to detect gunfire in tests performed on military bases.[2] During the VIPER program, researchers fired over 15,000 rounds of ammunition from different types of small arms, with small arms defined as anything .50 caliber and under.[2] The tests showed that every small arm tested could be detected at and beyond the gun's effective firing range.[2] Later advances in camera technology, particularly wide-angle cameras, significantly improved the signal-to-clutter ratio in gunfire detection, which in turn reduced the number of false alarms recorded.[2]

By 2002, attention had shifted to airborne-based systems on Forward Eyes UAVs to detect larger artillery in response to the Washington, D.C. sniper attacks.[2] These airborne-based systems were designed to detect large artillery, such as mortars, from a relatively low altitude and then relay the GPS coordinates of the shooting event back to a ground station.[2]

It was only a matter of time before technology like ShotSpotter made its way from military base tests to use by law enforcement. ShotSpotter is made by the California company SST.[1] According to SST, ShotSpotter helps fill a gap in current data on gun violence, which overlooks shots fired to scare people, shots fired at animals, and gun battles in which bystanders do not call the police.[1] SST reports that fewer than 20% of gunshots result in a 911 call.[1] These shots are overlooked because gun-violence data generally comes from three main sources: 911 calls, the mandatory reports hospitals file when they treat gunshot victims, and coroner reports on homicides or suicides by gunshot.[1]

In 2011, SST made ShotSpotter more affordable for small to mid-size cities when the company began using a cloud-hosting platform with the program.[1] As the cost went down, more police departments began using ShotSpotter to help detect gunfire. Today, approximately seventy cities across the country are using ShotSpotter.[1]

Despite its increase in use among police agencies around the country, ShotSpotter does not have all the bugs worked out. For example, the program has difficulty differentiating between the sounds of gunfire from the sounds of firecrackers or cars backfiring.[1]

Even more troublesome is the possibility that police may use ShotSpotter selectively, arresting more people in some neighborhoods than in others.[1] There is little doubt that the criminal justice system disproportionately preys on people of color. In the last thirty years, the prison population has quadrupled, and 58% of those incarcerated are black or Hispanic, even though these groups make up only about a quarter of the country's total population.[3] Even more worrisome, black people are sent to prison for drug crimes at ten times the rate of white people, even though white people use drugs at five times the rate of black people.[3]

As a result, it should not be surprising that these racial disparities also boil over into the artificial intelligence arena of the criminal justice system. Currently, one of the most common uses of artificial intelligence in the criminal justice system is with risk assessment tools. These risk assessment tools analyze data that may be correlated with future criminal activity and are used when imposing sentences, setting bail, and determining release.[3] Unfortunately, although not surprisingly, these artificial intelligence recidivism calculations correlate strongly with race.[3]

One study looked at over 7,000 risk scores assigned to people arrested in Broward County, FL in 2013 and 2014 using a program from a for-profit company called Northpointe.[4] The results demonstrated that the program wrongly flagged black individuals as future criminals at almost twice the rate of white individuals.[4] Moreover, white individuals were wrongly labeled as "low-risk" more often than black individuals.[4] This disparity could not be explained by an individual's prior crimes or the types of crimes for which the individual was arrested.[4] In an independent statistical test that isolated the effect of race from other factors such as recidivism, criminal history, age, and gender, black individuals were still 77% more likely to be labeled at higher risk of committing future violent crimes and 45% more likely to be predicted to commit any type of crime.[4] Northpointe disagreed with these results.[4]
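The disparity measured in the study is a difference in false positive rates: among people who did *not* go on to reoffend, how often was each group flagged as high risk? A minimal sketch of that calculation using hypothetical toy records, not the actual Broward County data:

```python
# Hypothetical records: each has the person's group, whether the tool
# flagged them high risk, and whether they actually reoffended.
records = [
    {"group": "black", "flagged_high_risk": True,  "reoffended": False},
    {"group": "black", "flagged_high_risk": True,  "reoffended": True},
    {"group": "black", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": False, "reoffended": True},
    {"group": "white", "flagged_high_risk": False, "reoffended": False},
    {"group": "white", "flagged_high_risk": True,  "reoffended": True},
]

def false_positive_rate(records, group):
    """Share of a group's non-reoffenders who were wrongly flagged
    as high risk -- the metric at the heart of the study's findings."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["flagged_high_risk"]]
    return len(flagged) / len(non_reoffenders)

for g in ("black", "white"):
    print(g, false_positive_rate(records, g))
# → black 0.5
# → white 0.0
```

Note that a tool can show equal overall accuracy across groups while still producing unequal false positive rates like these, which is one reason auditing a single headline accuracy number is not enough.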

Northpointe’s risk assessment tools are the most widely used assessment tools in the country.[4] Although Northpointe does not publicly disclose the factors it uses to calculate a person’s risk score, the assessment program requires answers to 137 questions, none of which are related to race.[4] However, Northpointe’s program asks questions such as:

“Was one of your parents ever sent to jail/prison?”

“How many of your friends/acquaintances are taking drugs illegally?”

“How often did you get in fights while at school?”[4]

It is unclear whether the biased results from artificial intelligence programs stem from bias rooted in the data used to train the algorithms in the first place, from the humans who program the risk assessment algorithms, or from some combination of the two.[3] Only a few studies have been done on these risk assessment tools, so the cause of the bias, along with the accuracy and validity of the tools, remains a mystery.[4] Researchers examined nineteen risk assessment methodologies used in the U.S. and found that "validity had only been examined in one or two studies" and that "frequently those investigations were completed by the same people who developed the instrument."[4] These findings raise the issue of bias in the studies themselves. With the validity and accuracy of these risk assessment programs questionable at best, it should be alarming that defendants rarely have an opportunity to challenge their risk assessment scores, a gap that could create constitutional problems courts need to be ready to address.[4]

A 2017 Harvard Political Review article suggests ways to decrease bias in artificial intelligence in the criminal justice arena. First, open data and algorithmic transparency should be emphasized.[3] This would make the data available to researchers who can investigate the validity and accuracy of these risk assessment programs.[3] Second, the U.S. government should raise its standards for the technology companies it contracts with, or even build the technology internally.[4] For example, before a contractor is selected, the government should run extensive simulations using fake information to ensure the private company's algorithms produce no discriminatory outcomes.[4] Furthermore, a special commission should be put in place to ensure the algorithms are not biased and to encourage transparency, using grant-funding incentives and a top-down approach to enforce standards.[4]

New York City is the first city to create a task force on automated decision systems.[5] The task force will recommend how each agency within the city should be held accountable for using algorithms to make important decisions.[5] AI Now proposed a framework for New York City's task force that is based on Algorithmic Impact Assessments (AIAs).[5] There are four main goals of AIAs:

The first goal deals with the public’s right to know about the algorithms being used in their communities and how they are used.[5] AI Now suggests publicly listing and describing the systems that are used to make important decisions that affect identifiable individuals or groups and that this information should include the purpose, reach, and potential public impact of the use of the algorithms.[5]

The second goal ensures accountability with the use of algorithmic systems by providing opportunities for external researchers to review, audit, and assess the systems being used to be able to detect potential problems.[5] AI Now suggests possibly even an independent, government-wide body that oversees the accessibility to researchers.[5]

The third goal is to increase expertise and capacity within public agencies so that they can anticipate issues such as disparate impacts or due process violations on their own, without relying on a third party to intervene.[5] To retain public trust, agencies must be experts on their own algorithmic systems.[5]

The fourth goal ensures the public can respond to and dispute an agency's approach to algorithmic accountability, which will further instill public trust in government agencies.[5] Moreover, due process will be strengthened by offering the public the opportunity to work with agencies on the use of algorithmic systems before, during, and after the assessment.[5]

With ShotSpotter taking off before the issues surrounding risk assessment algorithms are solved, it will be interesting to observe whether ShotSpotter runs into the same problems and, if so, whether they are handled the same way. As with any new technology or artificial intelligence, there are many questions to consider.

  • Will ShotSpotter have the same bias issues as other algorithmic technologies such as risk assessment tools? If so, would the issue of bias be addressed in the way suggested for risk assessment bias, or in a different way? If not, how is ShotSpotter different?
  • What constitutional issues could be raised by the use of artificial intelligence such as ShotSpotter or risk assessment algorithms? How could these constitutional issues be overcome while still leaving a place for the use of artificial intelligence and algorithms in the criminal justice context?
  • Finally, what other goals should be addressed by a task force like the one created in New York? How would a task force like this be implemented?



[1] https://singularityhub.com/2013/08/09/sensors-report-gunfire-directly-to-police-in-70-u-s-cities-no-911-call-needed/#sm.0000ql5bsyqsadi3112osqog3aq0d
[2] https://apps.dtic.mil/dtic/tr/fulltext/u2/a460225.pdf
[3] http://harvardpolitics.com/online/artificially-intelligent-criminal-justice-reform/
[4] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[5] https://medium.com/@AINowInstitute/algorithmic-impact-assessments-toward-accountable-automation-in-public-agencies-bd9856e6fdde