Tech Accountability:

A conversation with two attorneys leading the charge to defend victims of online harms

Yael Eisenstat
Betaworks

--

Who bears responsibility for the real-world consequences of technology? In the wake of the recent congressional “Big Tech” hearing and President Trump’s Executive Order attempting to strip Facebook and Twitter of their legal protections, the debates over how to define social media accountability, and whether and how the government should regulate social media companies, are heating up.

With these questions in mind, I invited Carrie Goldberg and Peter Romer-Friedman, two attorneys who have taken on some of the biggest cases on behalf of victims of online harms, to discuss the current state of play and help our Betaworks community of entrepreneurs think more critically about the products they are building and the guardrails they can put in place to protect users, regardless of whether the law obligates them to.

A key thread throughout our conversation was that one piece of legislation has been interpreted broadly and used repeatedly to hinder victims’ ability to seek recourse: Section 230 of the Communications Decency Act, the 1996 law that provides platforms immunity from liability for third-party content they host. Despite its well-meaning intent to help a burgeoning internet flourish and to give internet companies leeway to moderate content as they see fit, this nearly 25-year-old law looms over not only questions of how to handle online speech, but also questions of platform accountability for how their tools are used. It has been vastly overinterpreted as a blanket immunity shield, so that internet companies are not held accountable for offline harms that result from online activity.

In a nutshell: if we ever want to balance free speech and an open internet with protecting people from abusive and illegal behavior, we must shift this debate to focus more on the tools these companies provide than on the speech of any one individual.

Goldberg and Romer-Friedman offered us some valuable insights from the cases they have prosecuted and some great advice to the next generation of technologists. You can watch the full conversation here:

A few highlights from the conversation:

An outdated legal regime enabling bad, illegal, or just “lazy” tech behavior

Goldberg built a law firm to represent victims after her own experience with online stalking and revenge porn. She explained that she rarely sees cases of intimate partner violence or stalking that don’t have a significant internet component. Yet tech companies immediately invoke Section 230 when victims try to have their day in court. One of Goldberg’s cases, Matthew Herrick v. Grindr (well worth reading about!), challenged the notion that tech companies should be allowed to hide behind Section 230 immunity when their product is used to facilitate dangerous behavior.

As she said in our panel discussion, “Mainstream tech has become lazy. They can stand on the idea that they will not be held responsible for the bad stuff that happens to users. There are no other industries that do not face responsibility for the products they develop.” She hoped to change that through her case against Grindr, suing the company “for its own product defects and operational failures.”

As the head of his firm’s civil rights practice, Romer-Friedman looks at cases where online tools are used to engage in what he calls “digital discrimination”: using people’s protected status to target them with ads or, even worse, to exclude them from ads about job opportunities, housing, and credit. As he explained, laws such as the Fair Housing Act mean nothing if they cannot be enforced online. Facebook’s ad tools, he added, are designed to be used to discriminate, yet the company claims Section 230 immunity even in these cases.

He has successfully negotiated settlements with Facebook over discrimination enabled by its ads platform, and even got the company to enact changes to prevent discrimination, including a special mandatory portal for creating job, housing, and credit ads without discriminatory targeting options.

Much of the internet governance debate gets caught up in “free speech,” with many arguing that any reform to Section 230 would be an assault on free speech. Goldberg explained that Section 230 has been interpreted so broadly that it now stretches well beyond speech. She spoke of one client whose daughter met a man online and was murdered on their first date. He was a registered sex offender with domestic violence orders against him, yet he was able to use the dating app. As she explained, the company has no incentive to stop predictably dangerous people from using its product to do dangerous things. “Section 230 is not just protecting free speech, it is protecting dangerous behavior.”

So where does that leave startups that want to build safe and ethical products?

We received some great questions from the audience about what startups can do to foresee potential dangers and solve for them before launching their products. The reality is that the current landscape for internet companies, particularly social media and other user-to-user platforms, does not incentivize them to slow down, map out potential risks early on, and build in guardrails to protect users from abuse. It has to be an intentional leadership decision.

Using examples from their own cases, Goldberg and Romer-Friedman offered tips on how to think through the ways a product could be used to victimize or discriminate against people, and how to build in policies and procedures from the start to protect even the least sophisticated user.

Romer-Friedman emphasized the importance of hiring a diverse team. If your team truly encompasses different perspectives, backgrounds, and lived experiences, they will be more likely to spot these problems early on.

But all agreed that the current legislative and regulatory landscape does not give internet companies the incentives they need to protect their users and take these steps, so it has to be a leadership decision at the very foundation of your company. As Romer-Friedman put it, legislators should at least set a baseline of rules that promote responsible behavior, but beyond that it is still up to corporate leaders to commit to organizing and structuring their businesses responsibly.

The bottom line: if we want to create a healthier and safer information ecosystem, which is one of the goals of our Betalab program, it will require a fundamental shift in which social internet companies prioritize the safety of every individual user over pure growth and scale. And it will take funders who recognize the value in that proposition and support companies through that growth model.

Since we can’t hold our breath waiting for government to figure this out, we hope to instill in our community of founders and entrepreneurs that just because the law doesn’t compel you to be a good steward of the public’s trust and safety doesn’t mean you shouldn’t be.

Betalab is an early-stage cohort-based investment program combined with a year-long series of workshops and events at Betaworks — with the singular goal of catalyzing startup activity around Fixing The Internet. Betalab will find and fund a select group of entrepreneurs who are building software and services that work for humans first, and finding ways to fix the things that are fraying today. Learn more at betalab.com.

--

Tackling the intersection of tech, policy & society. Fighting for democracy. Future of Democracy Fellow at Berggruen Institute. More at www.yaeleisenstat.com