Does the FCC’s plan to “Unlock the Box” inadvertently unlock piracy?

•October 15, 2016 • 10 Comments

Today, 99% of cable and satellite customers pay an annual average of $231 per household to rent a set-top box from their service provider in order to view the programming they already pay for.[1] This rental fee results in an annual profit of almost $20 billion for the cable and satellite companies (aka “MVPDs” or multichannel video programming distributors), on top of the profits they already make from subscription fees.[2] As Federal Communications Commission (“FCC”) Chairman Wheeler explained, this “lack of competition has meant few choices and high prices for consumers.”[3]

Congress recognized this problem 20 years ago – passing the Telecommunications Act of 1996 and adding Section 629 to the Communications Act in order to increase the commercial availability of third-party set-top boxes.[4] Back then, Congress compared the idea to the telephone industry: if you can use a landline purchased at Walmart to call someone through your AT&T service, why should you be forced to rent a set-top box from your MVPD to watch their cable or satellite service? Congress continues to make that analogy today, updating it with more modern technology like cellphones and wifi routers.[5] Unfortunately, Congress’s legislation did little to fix the problem.[6]

In order to meet Congress’s goal, the FCC established the Downloadable Security Technical Advisory Committee (“DSTAC”) in accordance with the STELA Reauthorization Act of 2014.[7] The Committee – consisting of MVPDs, device manufacturers, production companies, and public interest groups – compiled a report which outlined recommendations for creating a security system that would allow consumers to view their MVPD’s programming through a third-party set-top box while protecting that content from infringement.

MVPDs and the entertainment industry suggested a Proprietary Applications approach that allows MVPDs to retain control over the consumer experience.[8] MVPDs would create apps that could be downloaded onto third-party set-top boxes and devices like phones, smart TVs, and tablets. These apps would allow MVPDs to uniformly control how the programming is presented and what additional features are offered.[9] Additionally, MVPDs would utilize a security system of their choice supported by “royalty free and open source” HTML5. MVPDs explained that this option complied with copyright law and existing licensing agreements. (However, critics of this approach have noted that the market fails to be truly competitive if MVPDs retain control over the consumer experience, since third-party manufacturers are not allowed to invent new features that entice consumers to purchase third-party set-top boxes instead.)

Consumer electronics advocates and the tech industry supported the Competitive Navigation approach, which would use a virtual head end system and link protection (like DTCP-IP) in the cloud. Under this approach, MVPDs would transfer three Information Flows to third-party devices: service discovery data (information that provides viewers with details about the programming like channels, program titles, ratings, airtimes, etc.), entitlement data (information that protects copyright by ensuring viewers only access and copy programming which they are authorized to access or copy), and content delivery (the actual programming). Additionally, third-party devices would be able to customize the viewing experience by adding additional features, by reordering how the programming appears, by adding additional content like YouTube videos, and more. Finally, in order to further prevent the theft and misuse of copyrighted programming, MVPDs would choose “at least one content protection system that is openly licensed on reasonable and non-discriminatory terms.” Third-party manufacturers would then develop boxes using at least one of those security systems and market those boxes to that MVPD’s consumers. This security regime was modeled after the smart TV industry and protects programming from piracy much as it is protected under the current CableCARD regime.
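The three Information Flows described above can be sketched as plain data structures. This is a hypothetical Python illustration of that division of responsibilities; every class, field, and function name here is invented for the example, not taken from the proposal:

```python
from dataclasses import dataclass

# Hypothetical models of the three Information Flows an MVPD would pass
# to a third-party device under the Competitive Navigation approach.

@dataclass
class ServiceDiscovery:
    """Guide data: channel, title, rating, airtime, etc."""
    channel: str
    title: str
    rating: str
    airtime: str

@dataclass
class Entitlement:
    """Rights data: what this subscriber may do with the program."""
    subscriber_id: str
    may_view: bool
    may_record: bool

@dataclass
class ContentDelivery:
    """The protected programming itself, e.g. a DTCP-IP-protected stream."""
    stream_url: str

def device_may_play(guide: ServiceDiscovery, rights: Entitlement) -> bool:
    """A compliant third-party box honors the entitlement data before playback."""
    return rights.may_view

guide = ServiceDiscovery("NBC", "Evening News", "TV-PG", "18:30")
rights = Entitlement("subscriber-123", may_view=True, may_record=False)
print(device_may_play(guide, rights))  # True
```

The point of the separation is that the guide data and the programming are decoupled from presentation: the third-party box decides how to display the first and render the third, but must always obey the second.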

After reviewing these recommendations, the FCC chose the Competitive Navigation approach and issued the First Notice of Proposed Rulemaking, which would allow MVPD subscribers to “watch what they pay for wherever they want, however they want, and whenever they want, and pay less money to do so, making it as easy to buy an innovative means of accessing multichannel video programming (such as an app, smart TV, or set-top box) as it is to buy a cell phone or TV.”

Senator Markey was a key player in the development of the above Acts and was thrilled to see the FCC finally formulating rules that could fix the problem:

“The FCC is using authority clearly provided by Congress to better allow consumers to choose which device to watch programming for which they have already paid. I applaud the FCC for its efforts and encourage the Commission to finalize these rules. It’s time we add set-top boxes to the list of all of the other consumer technologies that have benefited from strong rules that fostered choice, innovation, and competition.”[10]

While Senator Markey was joined by consumer advocates, tech companies, and other interest groups in supporting the proposal, the US Copyright Office, MVPDs, and the entertainment industry strongly opposed the rules for their implication on copyright law and licensing agreements.

At the request of Congress, the Copyright Office addressed the potential copyright implications of the proposal, supporting its goals but laying out five areas where copyright law was implicated: the exclusive right to license; the exclusive rights to perform, display, reproduce, or distribute; the copyright interests of MVPDs; security issues; and enforcement issues. The second, fourth, and fifth have the most obvious criminal implications, so they will be the focus of this blog post.

The Copyright Office claims the proposal could result in copyright infringement, because the Entitlement Data does not do enough to prevent infringement of copyright owners’ exclusive right to perform, display, reproduce, or distribute their work.[11] Taking it one step further, some in the entertainment industry voiced their concern that the proposal invites piracy of their copyrighted works.

Specifically, the entertainment industry believes the proposal creates an opening and an incentive for third-party manufacturers to create set-top boxes that are designed with piracy in mind.[12] They believe applications could be added to third-party set-top boxes in order to present pirated content alongside legally licensed programming. In reality, these “pirate boxes” already exist in the American marketplace. You can watch pirated content free of charge and free of commercials by buying a pirate box online or at your local shopping mall for about $350. (It is important to note, however, that these pirate boxes are very rarely used in America. In fact, the overwhelming majority of copyright piracy in America occurs via online file sharing.)

The Electronic Frontier Foundation (“EFF”), however, highlights that “nothing in the proposed rules permits any party to obtain unauthorized access to programming.”[13] Piracy and pirate boxes are illegal and will continue to be illegal if the proposal is enacted.

In regards to security issues, the proposal makes clear that each MVPD must choose at least one licensable security system, and each third-party box must license and utilize one of those security systems. These security systems are technological measures that control access to the MVPD’s programming.

The Digital Millennium Copyright Act forbids “circumvent[ing] a technological measure that effectively controls access” to copyrighted works online.[14] The DMCA protects copyright online, and those protections apply to third-party set-top boxes since they utilize internet protocol to deliver programming to the viewer’s television set. Thus, third-party manufacturers that make pirate boxes to circumvent or ignore Entitlement Data are breaking the law and are subject to criminal punishment. So too are consumers that use those boxes. Manufacturers and consumers can, should, and will be criminally punished if they seek to circumvent the security systems these set-top boxes use to protect copyrighted programming.

Finally, the Copyright Office suggests the FCC needs to more thoroughly analyze compliance enforcement mechanisms, because the proposal “underestimate[s] the barriers to invoking copyright remedies to redress potential violations by third-party actors purporting to operate under this rule.” For example, it is hard to enforce the DMCA against a foreign pirate box manufacturer, and it is hard for a content creator to seek damages against one.

There are a number of reasons enforcement of copyright is difficult. First, MVPDs and content creators have no adequate way to monitor infringement. Second, copyright litigation is inherently expensive, time consuming, and uncertain. Third, foreign perpetrators are even harder to punish and successfully enforce a judgment against.

But that is not because of this proposal. Copyright is hard to enforce with or without these new regulations. The FCC points out that the proposal does not change a copyright holder’s rights or remedies. Likewise, EFF echoes that “the Unlock the Box rules do not affect the status or enforcement of copyrights.” In essence, the rules do not legalize piracy or pirate boxes. The rules do not change the fact that it would be hard to punish or seek damages against a pirate box manufacturer. That is not the purpose of the proposal, and that is not the FCC’s job.

The FCC’s proposal does not alter the criminal protections or punishments for piracy. Piracy exists, but the FCC is not responsible for protecting copyright – the US Copyright Office is (or, where new legislation is necessary, Congress is). The FCC cannot regulate copyright, so it should not attempt to do so in these rules. The FCC was asked to regulate the set-top box industry to ensure openness and competitiveness, and that is what these rules do. Thus, the first set of proposed rules should be enacted.

Unfortunately, the incessant lobbying of MVPDs and the entertainment industry has led the FCC to abandon its first proposal in favor of a second which largely resembles the Proprietary Applications approach.[15] Not-so-coincidentally, the second proposal (which, mind you, was their idea in the first place) is now strongly opposed by MVPDs and content creators (which, mind you, are often in the same corporate family tree, like NBCUniversal and Comcast) – essentially forcing the FCC back to the drawing board.

It looks like we may go another twenty years without enacting regulations that finally address Congress’s goal, written into law in 1996, of opening up a competitive market for set-top boxes.

I leave you with a few questions:

  • Do either, neither, or both approaches sufficiently protect copyrighted programming from piracy?
  • Should more be done to stop the proliferation of piracy and the potential proliferation of pirate boxes in America or is the current state of copyright law sufficient? If so, what should be done?
  • What can be done to ensure that the enforcement process is effective in punishing pirates and in providing copyright holders with an effective means for seeking damages against pirates?
  • Considering the fact that MVPDs now oppose both proposals including their own Proprietary Applications approach, are they really upset about the copyright implications or are they actually afraid of missing out on $20 billion worth of rental fees?




[4] See Telecommunications Act of 1996, Pub. L. No. 104-104, § 304, 110 Stat. 56, 125-126 (1996)


The FCC and Ancillary Power: What Can It Truly Regulate?, 36 Hastings Comm. & Ent. L.J. 311 (Summer 2014)








[14] 17 U.S.C. § 1201


Bitcoin: A Risky Investment for All Consumers

•October 9, 2016 • 8 Comments

In 2009, Satoshi Nakamoto launched Bitcoin, the world’s first cryptocurrency: a decentralized digital currency which utilizes an encrypted payment system to allow its users to conduct transactions without the need of a financial institution or other “trusted third party”. [1] Without any government control over the money supply or interference from third parties, Bitcoin is a “currency” directly controlled by its users. Id. There are many advantages to using Bitcoin, such as low-cost transactions, increased privacy, encrypted security, and a self-regulated money supply. [2] In addition, Bitcoin is not only used as an alternate currency; it can be sold and converted to real cash, which makes it an attractive investment asset. [3] A recent article from the International Business Times stated that 80 percent of the users of Coinbase, a Bitcoin exchange company, are using Bitcoin for investment purposes. [4]

However, while Bitcoin may pose as a “currency”, it is not considered legal tender, and there is uncertainty about how it should be treated under current laws. [2] This uncertainty puts investors at risk of significant financial loss with very few legal protections afforded to them. Id. For instance, a Hong Kong Bitcoin exchange known as Bitfinex discovered a recent security breach in which nearly 120,000 bitcoins were stolen. [5] The theft resulted in a loss valued at 77 million U.S. dollars. Id. Further, Bitfinex calculated that, as a generalized result of the breach, each user suffered an estimated loss of thirty-six percent of the value of their account. [6] Without any involvement from a bank or financial institution, Bitcoins lost due to fraud or theft are difficult to trace. [7] In addition, unlike U.S. dollars, Bitcoins are not federally insured, which further limits a victim’s ability to recover for losses involving Bitcoin. Id.
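The Bitfinex figures can be checked with a little arithmetic. This is a rough sketch: the bitcoin and dollar amounts are the ones reported above, and the $10,000 account is a hypothetical example of how the socialized 36% write-down worked:

```python
# Back-of-the-envelope check of the Bitfinex hack figures quoted above.
stolen_btc = 120_000
loss_usd = 77_000_000

# Implied market price per bitcoin at the time of the hack
implied_price = loss_usd / stolen_btc
print(round(implied_price, 2))  # roughly 641.67 USD per BTC

# What a hypothetical $10,000 account was left with after the ~36% haircut
account_value = 10_000
haircut = 0.36
print(round(account_value * (1 - haircut)))  # 6400
```

Because Bitfinex spread the loss across every account rather than only the accounts that were drained, even users whose coins were untouched by the thieves absorbed part of the $77 million.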

Several federal agencies have made efforts to clarify the treatment of Bitcoin under relevant U.S. laws, but it remains uncertain how exactly the government classifies Bitcoin. The Financial Crimes Enforcement Network (FinCEN) is a federal agency which serves the primary purpose of protecting against money laundering and other financial crimes. [8] In a 2014 guidance, FinCEN did not specifically define or categorize Bitcoin as a currency, commodity, security, etc. [9] Instead, the agency held that the regulation of Bitcoin would depend on how the Bitcoins were being used. Id. Further, the FinCEN guidance stated Bitcoin use could be divided into the following three categories:

  • Users – Users are merely individuals who use Bitcoin as a form of payment for goods and/or services. Users do not have to register as a money services business and are not subject to FinCEN regulation;
  • Exchangers – An Exchanger is a person or business who sells Bitcoins in exchange for real currency.  Exchangers must register as a money services business and are subject to FinCEN regulation.
  • Administrators – An Administrator is a person or business who controls the issuance and withdrawal of virtual currency from circulation. Like Exchangers, Administrators must register as a money services business and are subject to FinCEN regulations. However, it seems this category would not apply to Bitcoin because its supply is not issued or reduced by a central authority. Id.
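The three categories above reduce to a simple rule about who must register as a money services business. This sketch paraphrases the blog’s summary of the guidance; the dictionary and function names are illustrative, not FinCEN’s:

```python
# Who must register as a money services business (MSB), per the blog's
# summary of the 2014 FinCEN guidance. Role names are my paraphrase.

MSB_REGISTRATION_REQUIRED = {
    "user": False,          # uses bitcoin to pay for goods/services
    "exchanger": True,      # sells bitcoin for real currency
    "administrator": True,  # issues/withdraws virtual currency from circulation
}

def must_register_as_msb(role: str) -> bool:
    return MSB_REGISTRATION_REQUIRED[role]

print(must_register_as_msb("user"))       # False
print(must_register_as_msb("exchanger"))  # True
```

The practical upshot is that ordinary holders who merely spend bitcoin sit outside FinCEN’s registration regime, while anyone in the business of converting it to real currency sits inside it.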

The Commodity Futures Trading Commission (CFTC) issued an order against Coinflip, which operated Derivabit, an online website on which Bitcoin users could engage in options and futures trading. [10] In the order, the CFTC charged Coinflip with operating an illegal, unregistered options trading platform in direct violation of the Commodity Exchange Act (CEA). Id. Interestingly, the CFTC did not define Bitcoin as a “currency” and instead held that it would be treated as a commodity. Id. A commodity is generally defined as “a basic good used in commerce that is interchangeable with other commodities of the same type.” [11] The definition of a commodity under the CEA focuses more on agricultural products but includes that a commodity can be “all services, rights, and interests in which contracts for future delivery are presently or in the future dealt with.” [12] Although the CFTC order seems to add further legal protections for consumers and investors who use Bitcoin, it is evident that ambiguity remains as to how the cryptocurrency should be treated under the law.

In SEC v. Shavers, a federal court in Texas added to the ambiguity surrounding Bitcoin regulation. [13] In Shavers, the SEC alleged that the defendant fraudulently solicited lenders to invest in Bitcoin Savings and Trust (BTCST), promising at least a 1 percent return. Id. Instead, Shavers’ false promises resulted in the investors suffering significant financial losses. Id. Shavers argued that the SEC did not have subject matter jurisdiction because Bitcoins were not securities and the transactions involved only Bitcoin, not any actual money. However, the federal court held that “Bitcoin is a currency or form of money, and investors wishing to invest in BTCST provided an investment in money.” As a result, the Bitcoin investments were subject to federal securities regulations, and Shavers was eventually ordered to pay over $40 million in disgorged profits and prejudgment interest. [14]

As you can see, Bitcoin remains a risky “investment” due to its lack of uniform treatment under our current laws, leaving consumers and investors with little clarity about the protections afforded to them in Bitcoin-related transactions. Nevertheless, Bitcoin continues to develop as a promising technology used for business. Despite the concerns with its regulation, Bitcoin has not lost any significant value, Bitcoin-related businesses continue to develop, and many Fortune 500 companies such as Dell, Expedia and Microsoft now indirectly accept Bitcoin as payment. [15, 16] It will be interesting to see if the government will continue to let federal agencies and courts interpret how Bitcoin should be regulated based on current law or if specific legislation will be implemented.

I’ll leave you with a few questions to consider:

(1) Do you think it is best to classify Bitcoin as a commodity, currency, security, or something else?

(2) Should the federal government prevent people from investing in Bitcoin until more specific regulations are in place to protect its investors from losses due to fraud and/or theft?

(3) Do you think Bitcoin or another virtual currency will ever be considered legal tender in the United States?  Would this be a good or bad thing?

[1] Bitcoin: A Peer-to-Peer Electronic Cash System, Satoshi Nakamoto,

[2] Bitcoin: Questions, Answers, and Analysis of Legal Issues,



[5] Bitcoin Value Falls Off the Cliff after $77 Million Stolen in Hong Kong Exchange Hack,




[9] FinCEN on Virtual Trading Platform,

[10] Feds Target Bitcoin Options Site, Declare Cryptocurrencies as Commodities,



[13] SEC v. Shavers,


[15] Three Reasons Why Bitcoin Isn’t Dead Yet,


How Bitcoin is treated in the context of Anti-Money Laundering Regulation

•October 9, 2016 • 8 Comments

Virtual currency (VC) existed before Bitcoin, but most of those attempts involved a centralized setup where the processing of payments and control over the currency ultimately reside with some individual figure. Bitcoin operates differently because it is based on peer-to-peer connections, similar to torrenting, which allow people to anonymously transmit Bitcoins. The Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) uses the term “de-centralized convertible virtual currency” to refer to VC like Bitcoin, noting the lack of a central repository and the ability of users to obtain the currency on their own.[1] For Bitcoin, this process of obtaining currency without engaging in transactions is called mining. Because Bitcoin can be obtained without engaging in transactions, FinCEN distinguishes between three groups of persons and businesses involved with Bitcoin. The first of these groups is a user, someone who “obtains virtual currency to purchase goods and services”; users are not considered to be money transmitting businesses. The second is an exchanger, someone who exchanges virtual currency for “real currency, funds, or other virtual currency.” The third is an administrator, someone who issues or redeems virtual currency. The second and third groups are considered by FinCEN to be money transmitting businesses.
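Mining can be illustrated with a toy proof-of-work loop: a miner repeatedly hashes candidate block data with an incrementing nonce until the digest meets a difficulty target. This is a simplified sketch only; real Bitcoin double-SHA-256-hashes an 80-byte block header against a vastly harder target, and the function name and difficulty here are invented for the example:

```python
import hashlib

# Toy proof-of-work: find a nonce whose SHA-256 digest has the required
# number of leading hex zeros. Illustrative only; not Bitcoin's real format.

def mine(block_data: str, difficulty_zeros: int) -> int:
    target_prefix = "0" * difficulty_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce  # this nonce "solves" the block
        nonce += 1

nonce = mine("example block", 4)  # find a hash with four leading hex zeros
digest = hashlib.sha256(f"example block:{nonce}".encode()).hexdigest()
print(nonce, digest[:10])
```

The work is asymmetric: finding a qualifying nonce takes many hash attempts, but anyone can verify the answer with a single hash, which is what lets the network reach agreement without a central repository.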

When combating money laundering, authorities mainly turn to two different charges: operating an unlicensed money services business and money laundering. While those charges often overlap (as they do in much of the Silk Road prosecution), there are important differences, starting with the fact that the unlicensed money services business charge does not require the State to prove the defendant had an intent to promote or facilitate unlawful activity. 18 U.S.C. § 1960. This expands the reach of anti-money laundering legislation and allows authorities to go after businesses that engage in borderline activity even when evidence that they knew of the illicit transactions is sparse. For both charges, a central element is that “funds”, “money”, or a “payment instrument” must be involved. While this might seem self-evident, it becomes important because many defendants contend that Bitcoin does not fall within these definitions and thus is not subject to regulation or sanction under anti-money laundering legislation. See United States v. Ulbricht, 31 F.Supp.3d 540 (S.D.N.Y. 2014); United States v. Murgio, 2016 WL 5107128 (S.D.N.Y. Sept. 19, 2016); Florida v. Espinoza, F14-2923 (Fla. 11th Cir. Ct. July 22, 2016).

Two jurisdictions have considered this issue and come to opposite decisions. The District Court of the Southern District of New York, in multiple cases, has determined that Bitcoin does count as money or funds. See Ulbricht, 31 F.Supp.3d 540 (S.D.N.Y. 2014); United States v. Faiella, 39 F.Supp.3d 544 (S.D.N.Y. 2014); United States v. Budovsky, 2015 WL 5602853 (S.D.N.Y. Sept. 23, 2015); Murgio, 2016 WL 5107128 (S.D.N.Y. Sept. 19, 2016). The 11th Circuit Court of Florida has held that Bitcoin is not money and does not fall within Florida’s anti-money laundering legislation. Espinoza, F14-2923 (Fla. 11th Cir. Ct. July 22, 2016).

In Ulbricht, the United States charged Ulbricht in the Southern District of New York with violating a variety of laws, including anti-money laundering legislation, based upon his creation and administration of the Silk Road site, which acted as a market for illicit goods and services. Ulbricht, 31 F.Supp.3d at 547. All transactions that occurred on the Silk Road involved Bitcoin rather than more traditional currencies, so if the court agreed with the defendant that Bitcoin is not money, then all of those transactions would be unreachable with anti-money laundering legislation. Id. at 548. Ultimately, the court reasoned that because Bitcoins carry value, act as a medium of exchange, can be exchanged for other currencies, and derive their value from their ability to pay for things, they should be treated as money. Id. The court noted the conflict between the FinCEN regulations, which treat virtual currencies like Bitcoin as money, and the IRS, which treats Bitcoins as property for tax purposes, and decided that the IRS’s treatment is irrelevant by looking directly at the statute’s definitions. Id. at 569. The court particularly noted that the anti-money laundering statute was intended by Congress to be broad and able to adapt to new ways that alleged criminals find to wash the proceeds of criminal activity. Id. at 570.

In Espinoza, the court dismissed the information against the defendant, holding that Bitcoin does not fall within Florida’s money laundering statute. Espinoza, F14-2923, slip op. at 5. The defendant was targeted by police because he advertised that he was available for bitcoin trades 24 hours a day under the username “Michaelhack.” Id. at 2. The only suggestion that the defendant was involved in criminal activity was the police asking if he’d be interested in purchasing stolen credit card information, to which he responded ambivalently. Id. The court explained that there was “unquestionably no evidence that the Defendant did anything wrong” in this case. Id. at 7. The court argued that Bitcoin did not count as money because of its high volatility, and deferred to the IRS’s treatment of Bitcoin as property rather than currency. Id. at 3, 6. The Southern District of New York in Murgio addressed the decision of the court in Espinoza and expressed significant disagreement, arguing that even under Florida law Bitcoin acted sufficiently like money to qualify for anti-money laundering regulation. Murgio, 2016 WL 5107128 at *8.

The questions I have for all of you are: (1) Do you believe Bitcoin should fall under anti-money laundering regulations?

(2) What factors and characteristics of Bitcoin make you think it should or should not be considered “money”?

(3) How important is the IRS’s treatment of Bitcoin as property when other parts of the government attempt to treat Bitcoin as currency?

(4) Do you think the lack of criminal activity by the defendant contributed to the Espinoza court’s decision that Bitcoin was not money?



The Use of Social Media in Criminal Prosecutions

•October 3, 2016 • 10 Comments

It’s safe to say that social media is here to stay. What was once thought to be a fad among teenagers and young adults has now crept its way into the hands of all generations. We are in constant connection with each other and share information about our lives much more openly than ever before. For a lot of us it’s been the easiest way to keep in contact with distant family and friends and the fastest way to disseminate information to a large group of people. Social networking sites like Facebook, Twitter, Instagram and Snapchat have created communities where sharing personal details about your life such as travel plans, photos and videos of your daily activities, reposted articles and memes and status updates regarding your thoughts on a particular event are easily accessible. However, social networking sites are used for much more than surface level connectivity.

For employers and prosecutors, social networking sites are a gold mine. Employers can easily google a job candidate and one of the first results to pop up will probably be their Facebook or LinkedIn page. If the candidate’s page is public or the candidate’s privacy settings are lax, employers can legally view tons of information about that candidate to help them in their search for the right employee. Prosecutors also have this same privilege. Collecting evidence via social media may be nontraditional, but it’s the way of the future for trial attorneys in different areas of practice like personal injury, family law and criminal law. Sometimes evidence gathered from social media can be used as direct or circumstantial evidence in trial to prove that someone was at a particular location or that someone committed a crime.

While courts in Florida agree that social media evidence is discoverable, the admissibility of such evidence is still up for debate. Admissibility depends on two major factors: relevancy and authenticity. Relevancy is much easier to determine because if the evidence is not related to any part of the case, it does not need to be introduced into evidence. Authenticity on the other hand is a bit trickier when it comes to social media. Federal Rule of Evidence 901 provides that “to satisfy the requirement of authenticating or identifying an item of evidence, the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is.” The rule also provides several examples of evidence that satisfy the requirement, which include testimony of a witness with knowledge, non-expert opinion about handwriting, comparison by an expert witness, and distinctive characteristics.

Authenticating evidence from social media can be complicated for a few reasons:

  1. The accused may not be the person who actually posted or communicated the message. Someone could’ve hacked into the accused’s account and posted and/or communicated messages that did not truly come from the accused. How can attorneys be sure that the evidence actually came from the accused?
  2. Another concern is that social media evidence, being electronic in nature, can be easily manipulated to misrepresent what was actually posted. How can attorneys determine if material has been altered in any way? How can courts and jurors be sure that evidence presented to them hasn’t been tampered with?
  3. Lastly, tagging people on social networking sites like Facebook and Instagram presents another hurdle. What if the wrong person is tagged or someone is mistakenly tagged in a photo? How can attorneys verify that the right person was tagged in the incriminating evidence?

The same examples given in FRE 901 can also be applied to the social media context. A witness with personal knowledge, distinctive characteristics about the evidence or expert witness testimony by an internet consultant are a few examples of methods that can be used to authenticate the information.

Prosecutors have free rein to search and collect evidence from social media that is public, but what happens when an accused person’s page is private and little to no information other than basic details is accessible? The Stored Communications Act (SCA) regulates when an electronic communication service can and cannot release electronically stored information about you to private parties. The SCA was created to preserve our privacy rights and protect our Fourth Amendment rights against unlawful searches and seizures. It protects information that is electronic communication, transmitted via an electronic communication service, maintained in electronic storage and not accessible to the general public. The government can only gain access to this sort of information after obtaining a warrant or subpoena in which they must prove “specific and articulable facts showing reasonable grounds to believe that the records or other information sought, are relevant and material to an ongoing criminal investigation.” American Civil Liberties Union v. U.S. Department of Justice. So for instance if a prosecutor wanted to collect evidence from an individual’s Facebook wall posts that were not public, the prosecutor would have to obtain a warrant because Facebook wall posts are protected by the SCA. A New Jersey federal court in 2013 was one of the first to analyze the SCA’s application to the Facebook wall and held that an employee’s Facebook wall posts were protected by the SCA. Ehling v. Monmouth-Ocean Hospital Service Corp.

There are a number of cases where prosecutors have discovered evidence via social media and used it in court. Courts are still testing the waters and trying to figure out how social media evidence should be used and applied. In Disciplinary Counsel v. Brockler, 145 Ohio St.3d 270, 48 N.E.3d 557 (2016), an assistant county prosecutor assigned to a murder case was terminated from his job and suspended from the practice of law for one year for professional misconduct involving a fictitious Facebook account. In an attempt to disprove the defendant’s alibi, the prosecutor created a fictitious Facebook account and chatted with the defendant’s girlfriend about the case for several hours until he got the information he needed. Ultimately, the case was handed over to another prosecutor, and it came to light that the previous prosecutor had contacted the defendant’s girlfriend under the fictitious account. The prosecutor had violated Professional Conduct Rule 8.4(c), which prohibits an attorney from engaging in conduct involving dishonesty, fraud, deceit or misrepresentation. The prosecutor wanted the board to carve out an exception for him, arguing that a comment to Rule 8.4 has an exception for lawyers who advise other lawyers about lawful covert investigative activities. However, because the prosecutor himself was doing the covert act, the board found that the exception did not apply to him.

In another case, Bryant v. State, 2016 WL 4705157 (2016), the appellant argued that legally insufficient evidence (photos found on his public Facebook page showing him holding the stolen items) supported his robbery conviction because the State did not introduce any evidence establishing that he was at the victim’s home on the date of the robbery. He argued that no witnesses testified that he was at the scene, that the Facebook photos were merely circumstantial evidence, and that there was no DNA evidence connecting him to the robbery. The court overruled the argument, looking instead at the combined and cumulative force of all the evidence, and held that the circumstantial evidence was sufficient to support the conviction.

In State v. Rund, 2016 WL 4162925 (2016), a 19-year-old was pulled over by an officer and ticketed for driving 68 miles per hour in a 60-mile-per-hour zone. He later sent tweets to a friend about the incident and how angry he was, posting threatening statements discussing how he would kill officers and hashtagging the police department; the last of the threatening tweets referenced a song. He was charged with the felony of making a terroristic threat. He quickly admitted he was wrong and showed great remorse. The court imposed a durational departure and modified his sentence from three years to two because he did not actually intend to kill officers and his conduct was atypical of the crime for which he was sentenced; terroristic threats, the court noted, are typically made face to face or sent directly through the mail. I think it’s also interesting to note that the court found that some of his tweets were lyrics frequently used in gangster rap songs: “The use of language that expresses approval of violence against police, while disturbing in this case, may not indicate actual intent to kill a cop and may merely constitute a protest against police conduct.”

I’d like to discuss one last case that is sparking conversation. California Penal Code section 182.5 states that any person who actively participates in a gang with knowledge that its members engage in a pattern of criminal gang activity, and who willfully promotes, assists, or benefits from the criminal activity of other gang members, is guilty of conspiracy to commit that felony. It’s basically guilt by association. San Diego prosecutors charged two men under this statute, alleging that they benefitted from murders allegedly committed by fellow gang members: one man supposedly got a boost in his music career because of the murders, and the other posted Facebook photos of himself flashing a gang sign and even has a gang sign tattooed on him. Prosecutors acknowledged that neither man actually committed the murders. There’s plenty of controversy surrounding this case because some feel the men are essentially being targeted for the content of their speech, and there’s debate over how the term “willfully” should be interpreted. The case also raises a deeper issue of how police determine who is a gang member and whether that determination unfairly sweeps young black men into gang prosecutions. The conspiracy charges were ultimately dismissed because the court found that the men did not willfully benefit from the killings. Penal Code section 182.5 is a largely untested law, and courts are entering new territory with its prosecution.

Some Questions to Think About:

-Do you think evidence gathered from social media should be used in court or should attorneys focus on gathering evidence another way?

-The Florida Rules of Civil Procedure have been amended to include discoverability of electronically stored information (ESI), but do not provide guidelines for the admissibility of ESI. Is the current framework sufficient? Should Florida amend the rules to also include the admissibility of ESI? If so, what kind of rules would you propose?

-Keeping in mind some of the challenges, what steps can be taken to authenticate evidence from social media?

-The Stored Communications Act protects Facebook wall posts. What other types of information do you think the SCA should protect? Facebook chats, messages posted on a friend’s wall, status updates, posts that were previously public then made private?

-Do you think there are any circumstances in which prosecutors should be allowed to create fictitious accounts to gather evidence crucial to their case?

-What do you think about California Penal Code 182.5 and how prosecutors can gather evidence from social media to determine gang affiliation? Do you think it will unfairly target young black men? Is Penal Code 182.5 an attack on free speech?



Racial Bias in Online Search Engine Algorithms

•September 18, 2016 • 10 Comments

I want you to go to Google. In the search bar, type in “three white teenagers”. Hit search. Now click on the images tab. What do you see? Now do another images search, only this time search for “three black teenagers”. Again, what do you see? It is very likely that your search for “three white teenagers” populated a number of different pictures featuring glowing, fresh-faced, wide-eyed, happy-go-lucky Caucasian youngsters. One image might show a group of three young and vivacious white girls, hands on hips, posing for the camera. Another image might show another group of three white teens, smiling innocently into the camera while each holds a soccer ball, football, and basketball. It is also very likely that your search for “three black teenagers” populated only one variety of images: mug shots of black individuals.

But perhaps this is simply a coincidence. Let’s try a different search, shall we? Try doing an image search for “beautiful dreadlocks”. What is your result? If it was anything like mine, you saw row after row of images featuring predominantly white individuals with different styles of dreadlocks. Hm, that’s curious. How about a search for “beautiful braids”? To quote Lewis Carroll: “Curiouser and curiouser!” This search populated even more images of white people than the last, all with various styles of braided hair. Well, this is interesting, particularly given the widely established fact that these two hairstyles (dreadlocks and braids) not only originated within the black community but are still most commonly seen within the black community.

Well, let’s try this one more time. This time try a Google image search for “professional hairstyles”. What do you see? Again, rows and rows of predominantly white women sporting various updos, ringlets, curls, ponytails, buns, and side sweeps. It seems as though any hairstyle donned by a white individual is deemed professional, at least by Google. Now, let’s augment this search by typing in “unprofessional hairstyles”. What are the resulting images? Again, row after row of images featuring predominantly black women with a wide variety of different hairstyles.

Does this seem strange to you? It should.

Apparently this is not an uncommon occurrence: major online tech companies are being criticized as a growing number of individuals take to social media to report racial bias in the way these companies display and use information. Most recently, Google has come under fire for obvious racial discrepancies in its search results. In fact, less than a month ago your Google searches for “black teenagers” would have been quite literally filled with nothing but mug shots of young black people. These biases are not isolated to race; they extend to gender as well. Racial and gender bias can be seen most clearly and most frequently in three different forms of online programming: search engine algorithms, big data, and online ad delivery.

First of all, what is a search engine algorithm? In its simplest form, a search engine algorithm is a set of rules, or a unique formula, that a search engine uses to determine the significance of a webpage. [1] These formulas are unique to each individual search engine and range in complexity, with the most complex (such as the ones used by Google) being the most coveted and the most heavily guarded. [2] Online search engine algorithms all share the same general construct. They take into account the relevancy of a particular page (analyzing the frequency, usage, and specific location of keywords within the website), the individual factors of a search engine (a common one being the number of pages the engine has indexed), and off-page factors (such as click-through rates). [3] Although most (if not all) online programs involve some type of search algorithm to populate results for their users, the most familiar examples are the popular search engine websites: Google, Bing, Yahoo, AOL, and Ask Jeeves (for us old-timers).
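To make the factor list above concrete, here is a toy scoring function combining on-page relevancy (keyword frequency and location) with an off-page factor (click-through rate). The weights, pages, and click-through numbers are entirely invented for illustration; real engines combine hundreds of guarded signals.

```python
def score_page(title, text, keyword, click_through_rate):
    """Toy relevance score: on-page relevancy plus an off-page factor."""
    words = text.lower().split()
    # Relevancy: how often the keyword appears relative to page length.
    frequency = words.count(keyword.lower()) / max(len(words), 1)
    # Location: a keyword in the title counts extra.
    in_title = 1.0 if keyword.lower() in title.lower() else 0.0
    # Arbitrary illustrative weights.
    return 0.5 * frequency + 0.3 * in_title + 0.2 * click_through_rate

pages = [
    ("Cats 101", "cats are great cats love naps", 0.10),
    ("Dog care", "dogs need walks and good food", 0.40),
]
ranked = sorted(pages, key=lambda p: score_page(p[0], p[1], "cats", p[2]),
                reverse=True)
print([title for title, _, _ in ranked])  # "Cats 101" ranks first for "cats"
```

The point of the sketch is only that the ranking is mechanical: whatever biases exist in the inputs and weights flow straight through to the ordering of results.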

Instances of online racial bias can be seen most clearly through search engine algorithms. Do you recall the experiment we conducted earlier? Similar searches conducted on different search engines produce varying results. For example, a search comparing “black teenagers” to “white teenagers” on Bing produced different results than Google, with the images of black teenagers more consistent with the images of white teenagers. Similarly, comparing “three black teenagers” to “three white teenagers” on Yahoo revealed the same stock photos of whimsical teenagers for the white-teen search, while the black-teen search returned mostly screenshots of the racially biased Google results for the same query, peppered with a few images of happy, smiling groups of black teenagers. These results raise the question: is it simply Google that is racist?

Big data is another area in which we can see instances of online racial bias. The term describes the large amount of data, both structured and unstructured, that inundates a business on a day-to-day basis. [4] A big data suggestion algorithm works similarly to a search engine in that it compiles and stores large amounts of information and creates suggestions and results tailored to a specific individual. A popular example is the Netflix movie suggestion algorithm, which takes what a user just watched and suggests similar titles based on their viewing history and the star ratings they gave similar movies. Another example is the popular e-commerce site Amazon, which compiles information from consumers based on pages they have viewed and items they have bought, and suggests similar items.
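A minimal sketch of the kind of suggestion algorithm described above, the “users who liked X also liked Y” idea behind the Netflix and Amazon examples, computed as cosine similarity between items’ star-rating vectors. All users, titles, and ratings here are invented, and real recommenders are far more elaborate.

```python
from math import sqrt

ratings = {  # user -> {title: star rating}
    "ann": {"Drama A": 5, "Drama B": 4, "Comedy C": 1},
    "bob": {"Drama A": 4, "Drama B": 5},
    "cat": {"Comedy C": 5, "Drama B": 2},
}

def item_vector(title):
    """One rating per user (0 if unrated), in a fixed user order."""
    return [ratings[u].get(title, 0) for u in sorted(ratings)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def most_similar(title, catalog):
    """Suggest the catalog item whose ratings best track the given title's."""
    others = [t for t in catalog if t != title]
    return max(others, key=lambda t: cosine(item_vector(title), item_vector(t)))

print(most_similar("Drama A", ["Drama A", "Drama B", "Comedy C"]))  # Drama B
```

Notice that the suggestion is driven entirely by patterns in past user behavior, which is exactly why such systems can absorb and reproduce whatever biases that behavior contains.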

The most recent issue with racial bias in big data suggestion algorithms involved Amazon. The popular e-commerce company recently upgraded its Amazon Prime service to offer Prime Free Same-Day Delivery, which provides Prime members with same-day delivery of more than one million products for no extra fee on orders over $35. [5] When Amazon started this new service, it was offered in 27 major metropolitan areas and provided broad coverage in most cities. [6] For example, in its hometown of Seattle, Amazon offered the service to every zip code within the city, including surrounding suburbs. [7] However, in six major cities, the service was not offered in zip codes with a high population of black citizens from low socio-economic backgrounds. [8] This is ironic considering that such a service would arguably be more beneficial to a struggling black single mother who (between running from job to job and taking care of her children) cannot find the time or spare the bus fare to go to the store, locate an item, and purchase it, than to a wealthy white yuppie who has no children, owns a vehicle, has the free time for afternoon Zumba classes, and splurges daily on iced pumpkin spice lattes from Starbucks.

Online ad delivery is another area where one can see bias and discrimination based on race as well as gender. Ad delivery is the process by which search engines and websites, through sponsorships and funding from ad-keyword companies, display advertisements in the form of picture links and keyword search links based on the content of a user’s search. Recently, Harvard professor Latanya Sweeney conducted a cross-country study of 120,000 internet search ads and found repeated incidents of racial bias. [9] Specifically, her study looked at Google AdWords buys made by companies that provide criminal background checks. [10] The results showed that when a search was performed on a name “racially associated” with the black community, the results were much more likely to be accompanied by an ad suggesting that the person had a criminal record, regardless of whether that person actually did. [11] Obviously, this produces adverse results and can severely diminish employment opportunities for individuals whose names are accompanied by ads suggesting they might have a criminal record.
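One simple way a study like Sweeney’s can quantify this kind of bias is to compare how often arrest-suggestive ads appear for searches on each group of names and report the disparity as a ratio. The counts below are invented placeholders, not the study’s actual data:

```python
def arrest_ad_rate(results):
    """Fraction of searches whose ads suggested an arrest record."""
    return sum(results) / len(results)

# 1 = ad suggested an arrest record, 0 = neutral ad (hypothetical search runs)
black_associated_names = [1, 1, 0, 1, 1, 0, 1, 1]
white_associated_names = [0, 1, 0, 0, 0, 0, 1, 0]

rate_b = arrest_ad_rate(black_associated_names)  # 0.75 in this toy sample
rate_w = arrest_ad_rate(white_associated_names)  # 0.25 in this toy sample
print(f"disparity ratio: {rate_b / rate_w:.1f}x")
```

A real study would of course also test whether the observed gap is statistically significant rather than chance, which Sweeney’s paper does across its 120,000 ads.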

We can also see instances of gender discrimination in targeted online ad delivery.  For instance, Google’s online advertising system showed an ad for high-income jobs to men much more often than it showed the ad to women. [12] Also, research from the University of Washington found that a Google images search for “C.E.O.” produced results where only 11 percent of the images were actually women, even though 27 percent of United States chief executives are women. [13]

The impact of all these forms of racial and gender discrimination on current society operates at a subconscious level. It has been argued that these forms of bias only serve to reinforce and reify biases within individuals and society at large. A person who searches for pictures of black teenagers and is instead bombarded with images of black teens in a criminal line-up will soon begin to think (or continue to think, if they have already adopted this faulty mindset) that all black people are dangerous or criminals. Similarly, employers who are consistently inundated with ads suggesting that individuals with names typically associated with the black community have criminal records will soon begin to assume that all persons named Jerome or Laquiesha have a criminal record. And women who are only targeted with ads promoting lower-paying jobs than men may begin to believe that these are the only jobs available or best suited to them. After all, if the internet said it, it has to be true, right?

But are these algorithms, in and of themselves, truly to blame? Although these search engine algorithms are largely self-correcting and self-sustaining, they are all created and programmed by humans. Humans may create these formulas to be unbiased, but they also (whether consciously or subconsciously) input their own human social and cultural stereotypes and biases within these formulas. One area of potential bias comes from the fact that so many of the programmers creating these programs, especially machine-learning experts, are male. [14] And white. Humans recreate their social preferences and biases, and algorithm and data-driven products will always reflect the design choices of the humans who built them. [15]  Once created, these systems simply “learn” from the now embedded information that was originally inputted by their human programmers.

So what does the law say about all of this? Does it say anything? Subsection (a) of 42 U.S.C. § 1981 provides that all persons within the jurisdiction of the United States shall have the same right in every State to the full and equal benefit of all laws and proceedings. [16] Related civil rights legislation includes the Civil Rights Act of 1964. Title VII of that Act applies to most employers and prohibits employment discrimination based on race, color, religion, sex, or national origin, and, through guidance issued in 1973, now also extends to persons with criminal records. [17] Title VII does not prohibit employers from obtaining criminal background information; however, certain uses of that information, such as a blanket policy or practice of excluding applicants or disqualifying employees based solely upon information indicating an arrest record, can result in a charge of discrimination. [18]

In light of all of this, I would like to pose some questions to you. Under current legislation, there seem to be legitimate legal implications for those found to be engaging in various forms of online discrimination. But who should actually be held liable for these types of abuses: the company, the individual programmers, or both? Should liability even attach in these instances? Should there be a legally viable cause of action for individuals (or classes of people) who claim to have suffered from these types of virtual discrimination (think of instances like the Amazon shipping practices or employers influenced by suggestive ads)? Is this even a legitimate issue that should be addressed? If so, what are some potential solutions to this growing problem?



[16] 42 U.S.C. § 1981.


Payment Blockades in the Fight Against Cyberlockers

•September 10, 2016 • 10 Comments

File sharing is the simple term for the complex array of processes and technologies involved in transferring digital information through online networks. [1]. It can be encountered in some form on almost every website. Among its most popular applications are websites and programs that host files submitted by users, including copyrighted movies and music. Since at least the mid-nineties, file sharing has been in constant friction with artists and industry groups who have lost millions of dollars to violations of their intellectual property (IP) rights. This has prompted attempts by the Federal Government and private organizations to combat illegal file sharing, with differing degrees of success. This blog explores payment blocking, one of the more recent methods, and how it may develop going forward.

First, an overview of illegal file sharing for context. In June 1999, Napster launched a peer-to-peer file service that specialized in the transfer and trade of MP3 files. [2]. At its peak, the site had over 70 million subscribers, but by July 1, 2001, the company had shut down its service in the fallout of an IP lawsuit by members of the Recording Industry Association of America. [3]. Napster was subsequently liquidated in 2002, following bankruptcy, and the company never recovered. [4]. In the years after Napster’s bankruptcy, BitTorrent (a form of file transferring) gained widespread adoption and led to the creation of popular torrent sites and file-sharing clients such as The Pirate Bay and LimeWire. [5]. In addition, other sites, such as Megaupload, accumulated user-submitted files and made them available for direct download from servers across the globe. [6]. Over time, many of these file sharing systems, generally known as cyberlockers, have been taken down at least once (some permanently) by federal interventions. However, they have always managed to return in some form or another.

The two most popular forms are those that allow for direct downloads, like Megaupload, and those that share content through a streaming service (playing content without transferring the actual file to the user). [7]. Both forms typically generate revenue by selling advertising space, premium memberships, and related software. [8]. The greater the internet traffic, the more revenue these cyberlockers produce, and they produce a lot!

The federal indictment of Megaupload in 2012 alleged that, at one point, the site was the 13th most visited website on the internet. [9]. The indictment further alleged that Megaupload had generated over 150 million dollars in premium memberships and over 25 million dollars in advertising revenue. [10]. Meanwhile, much of the material generating traffic was copyright protected, and each download arguably reduced the value of the owners’ IP.

Overall, for sites that offer direct downloads, the percentage of files violating copyrights has been found to be upwards of 78%, and as high as 83% for streaming services. In addition, files on cyberlockers tend to include harmful malware and other illegal content. A recent study by NetNames, a staunch advocate of IP rights, found that more than half of current cyberlockers were responsible for infecting computers with malware, at least some of which likely facilitated identity theft. [11]. Further, the indictment of Megaupload alleged that the site was a frequent host of child pornography and terrorist propaganda. [12].

The question remains then, what is being done to stop cyberlockers? In steps the payment blockade. Approximately 80% of online transactions involve the use of a credit or debit card. [13]. Therefore, major credit transaction companies, such as PayPal, Visa, and MasterCard, can effectively slow the flow of money into cyberlockers by refusing to offer them payment processing services. [14]. This limits the cyberlockers’ ability to generate membership revenues and has the added effect of diminishing their capacity to reward pirates for uploading illegal content.

Currently, there are “best practices” frameworks in place that allow IP holders to report IP rights violations to payment processors. [15]. These processes have many steps. One process, for example, requires that IP holders send cease and desist letters, collect evidence of infringement, identify and describe infringing products, and provide evidence of their IP rights – all before submitting a complaint. [16]. However, there is some concern that they may not be implemented fairly. [17].

These frameworks are extrajudicial; they lack the kinds of procedural guarantees that merchants would otherwise receive in a civil trial. [18]. The standard for evidence of infringement is limited only by the effort payment processors are willing to expend to ensure just outcomes. Further, the staff investigating and passing on these matters may have no legal background. There is also an alarming incentive to satisfy powerful IP holders to the detriment of smaller merchants, especially where protocols place the burden of proof on the alleged offender. [19]. There is even the potential for conspiracies in which IP owners and powerful merchants accuse competing merchants in order to reduce competition. While there may be a requirement to report in good faith, [20] there are still many who might find a benefit in an efficient breach. The effect is to create a system in which merchants could face financial ruin for only minor IP violations, with a review standard limited to the payment processor’s business judgment.

It should not come as much of a surprise that payment processors would want some assurance that they can identify and prevent illegal transactions. What is interesting is that the Federal Government has put political pressure on payment processors to adopt these frameworks instead of legislating in the area. [21]. There are many reasons why Congress and the Executive Branch would steer clear of regulation, especially considering the cost of oversight. However, it is concerning that the government is encouraging the adoption of potentially unscrupulous practices.

What is of even greater concern is that the Federal Government may be encouraging more than just frameworks. There is credible evidence to suggest that major payment processors Visa and MasterCard were pressured to deny transaction services to WikiLeaks in the wake of the website’s 2010 leak of military documents. [22]. The denial of service came without formal charges levied against WikiLeaks, and the effect of the blockade temporarily reduced the site’s income by 95%. [23]. There was also recently some suggestion that the Motion Picture Association of America (MPAA) had placed pressure on Senator Patrick Leahy to convince Visa and MasterCard to target specific VPN servers that were allowing users to mask their IP addresses when illegally downloading movies from Megaupload. [24]. The situation is all the more curious considering Megaupload founder Kim Dotcom’s well-known feud with the MPAA. [25].

For now, there are many ways that users and cyberlockers can circumvent a payment blockade. For one, cyberlockers will continue to earn revenue from the sale of advertising space. In addition, cyberlockers may accept bitcoin and utilize an army of shell companies to conduct transactions. [26]. Both allow users to funnel money into a source that cannot easily be traced to the IP violator. It is not yet clear how payment processors will tackle these issues, and it will be interesting to see what influence, if any, payment processors will bring to the development and proliferation of bitcoin.

I’ll leave readers with a couple of thoughts to consider. First, knowing the inherent risks, pressures, and limitations of payment blockades, should they continue unobstructed or should there be greater oversight of the process? It might help to remember that Congress has recently failed to pass both the highly publicized Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA). Does it change your mind at all to know that PIPA was introduced by Senator Patrick Leahy, and that there may be strong political pressures at play? How far should such a regulatory scheme go? Currently, the Digital Millennium Copyright Act (DMCA) allows for an aggressive takedown process that is comical when IP owners accidentally target themselves [27], but disturbing when the owners incorrectly target innocent fair users. Should that kind of oversight concern voters when considering future legislation?

Second, should we be advocating for greater penalties against payment processors for supporting IP theft? Surely, the credit card companies have made plenty of money from these illegal transactions, and a simple search online would have revealed their obvious contributions to IP rights violations. Why then do they deserve a gold star for implementing a blockade instead of a penalty for past offenses? It is possible to sue for contributory and vicarious infringement of copyright and trademark protections. [28]. These four claims were addressed in Perfect 10, Inc. v. Visa Int’l Serv. Ass’n, where Visa was sued for providing processing services to a cyberlocker.

For contributory copyright infringement, the standard is whether the intermediary “induces, causes, or materially contributes to that infringing conduct.” [29]. The knowledge standard is actual or constructive and must be based on a “specific instance of infringing activity.” [30]. The Court held that because Visa was not essential to the acts of reproducing and distributing the IP, it made no material contribution to the violations.


In the case of contributory trademark infringement, the standard is “intentionally induced an underlying direct infringement or continued to supply either a service or an infringing product to a direct infringer while knowing of the direct infringement.” [31]. Here, the Court held that Visa lacked direct control of the infringing website and could only influence their behavior by limiting service. [32]. By the same reasoning, the Court also rejected the claim for vicarious trademark infringement because Visa lacked actual or apparent partnership with the direct infringer with authority to bind them in transactions. [33].


Finally, the standard for vicarious copyright infringement is whether the intermediary “had, but declined to exercise, the right and ability to control or supervise a direct infringer from whose actions it directly profited.” [34]. Again, the Court found for Visa reasoning that the ability to influence infringement by financially punishing an infringer is different from the ability to supervise and control the infringer. [35].

File sharing comes with a moral divide. On the one hand, it has provided for legitimate and cutting-edge applications of online media, opening the door to research databases, social networks, and cloud computing. On the other hand, it has facilitated the illegal pirating and distribution of intellectual property on a global scale. How we respond to the latter will largely dictate the freedom and development of the former. That is why we should keep a close watch on the development of payment blockades, especially if they are given the full support of the Federal Government.







[6] (at paragraphs 4 and 6)

[7]  (at page 1)

[8]  (at page 3)

[9] (at paragraph 3)

[10] (at paragraph 4)

[11]  (at page 10)

[12] (at paragraph 24)

[13] (at page 1526)

[14] (at page 1526)

[15] (at page 1528)

[16] (at page 1550)

[17] (at page 1560)

[18] (Id.)

[19] (See page 1560)

[20] (See page 1553)

[21] (See page 1527)

[22] (See page 1524-25)

[23] (page 1525)





[28] (See page 1531-36)

[29] (page 1531)

[30] (Id.)

[31] (at page 1533)(quoting Perfect 10, Inc. v. Visa Int’l Serv. Ass’n, 494 F.3d 788, 807 (9th Cir. 2007))

[32] (at page 1534)

[33] (at page 1535-36)

[34] (at page 1534)

[35] (at page 1535)


Mortgage Fraud Through the Straw Person Scheme

•September 5, 2016 • 10 Comments

What is real estate? As defined on Investopedia, real estate is property comprised of land and the buildings on it. For most people, the American Dream is to own their own piece of real estate, which also usually happens to be a person’s most valuable asset. Throughout the years, people have come up with a variety of ways to misrepresent information to gain an advantage in a real estate transaction. Real estate fraud can take many forms, from mortgage fraud to the straw-man scheme. Here is a list of several types of real estate fraud (please keep in mind that this is not an exhaustive list and real estate fraud can present itself in many other forms) (5 Okla. J. L. & Tech. 44):

  • Mortgage Fraud: Occurs when a borrower makes a material misrepresentation or omission on a loan application for which a lender relies upon to give out the loan. Had the lender been aware of the actual facts of the surrounding circumstances, the loan would either not have been approved or the borrower would have received a smaller loan amount.
  • Flipping: Flipping homes is not an uncommon practice, and there is a fine line between doing so legally and illegally. In a legal scenario, a buyer buys a home at what is considered a cheap price, renovates and/or fixes it, and then sells it for a profit, generally all within a short time period. The buyer crosses the line from legal to illegal flipping when he/she inflates the appraisal price (with the help of a professional) and receives more than the true market value of the home.
  • Equity Skimming: This type of fraud occurs when the buyer is able to convince the seller to re-list the home for a higher sales price than initially asked for. In doing so, the buyer is able to obtain a larger mortgage than he/she otherwise would have from a financial institution. After the buyer obtains the loan, he/she pays the seller the amount of the original asking price and the buyer keeps the remainder of the loan.
  • Straw-Man Scheme: The process to defraud usually involves a professional (i.e., a real estate agent or broker, a mortgage broker, and/or a notary). There may be others involved, such as the buyer, the seller, and the straw person; however, the straw person may not actually have any knowledge of the fraud that is about to occur. The straw person is someone who currently has good credit, generally a close relative or friend of a buyer who has bad credit. The straw person’s name and personal information are used on all of the documents involved in the transaction. After the loan is secured, the professionals involved and/or the buyer flip the home for a profit. Unfortunately, the straw person never sees any of the loan proceeds and is left to assume the mortgage, may see his or her credit ruined, and may face criminal charges.
    • This scheme closely resembles “flipping” with a major difference being that a third party is using their personal information to assist a buyer in obtaining the loan to buy a house. From there, the house gets flipped for a profit and the straw person is left to fend for him/herself.
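The arithmetic behind equity skimming can be sketched with a few lines of code. All of the figures below are invented purely for illustration; actual schemes vary in scale:

```python
# Hypothetical equity-skimming arithmetic (all figures invented for illustration).
original_asking_price = 200_000  # the seller's initial asking price
inflated_list_price = 260_000    # the re-listed price the buyer persuades the seller to use
loan_to_value = 0.90             # assume the lender finances 90% of the (inflated) sale price

loan_amount = inflated_list_price * loan_to_value  # amount borrowed against the inflated price
paid_to_seller = original_asking_price             # the seller still receives only the original price
skimmed_equity = loan_amount - paid_to_seller      # the difference pocketed by the buyer

print(f"Loan obtained:  ${loan_amount:,.0f}")
print(f"Paid to seller: ${paid_to_seller:,.0f}")
print(f"Equity skimmed: ${skimmed_equity:,.0f}")
```

The point the numbers make is that the fraud is invisible on the face of the closing documents: the lender sees only the inflated sale price, while the gap between the loan and what the seller actually receives walks out the door with the buyer.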

In 1999 the Uniform Electronic Transactions Act (UETA) was passed. Section 7 of the act states that “a record or signature may not be denied legal effect or enforceability solely because it is in electronic form.” This was the first act allowing deeds, mortgages, and other documents to be created by electronic means, because e-signatures were now enough to satisfy the signature formality. An electronic signature is any identifying electronic sound, symbol, or process adopted by the signer with the intent to sign the record to which it is attached. UETA was a big step in the push to have transactions occur entirely online rather than in person and through paper documents. (5 Okla. J. L. & Tech. 44).

Following the adoption of UETA came the Electronic Signatures in Global and National Commerce Act (E-Sign), a federal law created in part to encourage the states that had not yet adopted UETA to do so. E-Sign gave legal effect to signatures, contracts, and other records in electronic form affecting interstate and foreign commerce. Both acts define and treat “electronic signatures” the same way; they differ in how they respond to the need to correct a mistake in a document and in how they treat consent by the parties. UETA gives the parties the option to correct or disregard an error in a contract, while E-Sign has no provision allowing a party to correct a mistake. As to consent, both acts allow it to be inferred from the parties’ conduct; however, the intent to consent is a critical element of UETA, which requires both parties to agree to use electronic forms. (5 Okla. J. L. & Tech. 44).

A major problem with UETA and E-Sign was that neither addressed the fact that a recorder’s office would not record e-documents in the public record: both acts validated electronic transactions, but the resulting documents were not yet among the items that could be recorded. In response to this gap, the Uniform Real Property Electronic Recording Act (URPERA) was passed. URPERA laid the groundwork for states to create a uniform system for the electronic recording of real estate transactions. (5 Okla. J. L. & Tech. 44).

Before the acts enabling e-recording came to fruition, transactions conducted with paper documents could take weeks, even months, to complete. With the aid of computers, such transactions can now take days, hours, or even minutes. What are the implications of this? More transactions are conducted electronically, and hackers or criminals who appropriate a borrower’s identity or commit some form of real estate fraud can commit more fraud in a shorter period of time, before anyone notices the identity theft or before a professional or institution spots a misrepresentation on a loan application.

Let’s talk about how mortgage fraud fits into the process of e-recording. As defined by Florida Statutes §817.545, mortgage fraud occurs when a person knowingly, and with the intent to defraud, makes, uses, or facilitates “any material misstatement, misrepresentation or omission during the mortgage lending process,” intending for the mortgage lender to rely on it. The statute also covers circumstances where a person receives any proceeds in connection with the mortgage lending process that the person knew resulted from a violation.

As stated earlier, a straw person may be paid for permission to use his or her name and personal information in a transaction. The straw person is usually one who can obtain better loan terms than the intended buyer. A straw person may be aware of what he or she is agreeing to or, like many unsuspecting Americans, may be enticed by the opportunity to make easy money and not find out what he or she has consented to until it is too late.

So how does the mortgage fraud statute fit in with the straw-person scheme? There are companies that reach out to people with good credit, seeking consent to use their identification and personal information; they will usually pay around $5,000 to $10,000 for the privilege. Unfortunately, many people do not consider the consequences of handing over such information when offered what they consider a large sum of money. The information these companies acquire is then used in mortgage loan applications, generally with the help of professionals in the real estate market.

A mortgage loan application that uses the straw person’s identity and personal information contains a material misrepresentation on which the financial institution relies, invoking §817.545. Because of the misrepresentations, the lender is under the mistaken belief that the straw person will be able to pay back the loan on the strength of his or her good credit, among many other misrepresentations. These misrepresented documents are eventually sent to the recorder’s office to be electronically recorded. The straw person is now liable for a mortgage he or she never intended to secure, because his or her name is attached to the mortgage loan application via an e-signature that someone created merely by typing the name into a document.

If there is a backlog at the recorder’s office, a perpetrator or company can continue fraudulently taking out several mortgages on the same or different properties using the same straw person’s information. A loan has already been secured, but because it has yet to be recorded, no notice is imparted to other financial institutions, or even to the person whose identity and personal information is being used.

Before a document may be recorded, the signature and notary formalities must be satisfied. These requirements exist to help prevent fraud, but they are not hard to get around. The signature formality can be circumvented by inducing a person to sign (whether under duress or by deceit), by physically forging a signature on a paper document, or by simply typing a person’s name into the required field of an e-document. The notary formality may be circumvented by finding a notary willing to join the scheme to defraud, or by finding an incompetent notary.

In 2014, Lawrence Wright, a Florida resident, received a sentence of 75 months in prison and was ordered to pay restitution of over $3.7 million after being charged with identity theft, making a false statement to a financial institution, and other counts. Wright enlisted the help of several straw buyers by promising that he would make payments on the loans, eventually flip the properties, and share the profits with them. Wright also had the straw buyers use the name of his ex-wife on the loan documents. Hopefully the straw buyers knew what they were doing was illegal, because Wright was certainly aware of his actions.

Regardless of which type of fraud a perpetrator is trying to commit, there are steps that can help prevent it. ALTA (the American Land Title Association) has laid out several best practices to help companies and firms do what they can to prevent fraud in the real estate transaction process and to avoid becoming victims of fraud themselves. Among its recommendations:
  • Establish and maintain current licenses;
  • Adopt and maintain written procedures and controls for Escrow Trust Accounts;
  • Adopt and maintain a written privacy and information security program to protect Non-public Personal Information; and
  • Adopt and maintain written procedures for resolving consumer complaints.

There are also plenty of steps the average American can take to avoid becoming a victim of fraud. What are some things you can think of that the average American can do, or be on the lookout for, to avoid becoming a victim of fraud?

As mentioned in the brief description of flipping above, not all home flipping is illegal. Can you think of an example in which using a straw buyer is legal?