Applying Industry Integrity Experience to Tech Reform: An Interview with IDDP Fellow Sahar Massachi

August 11, 2023

Sahar Massachi

The Institute for Data, Democracy, and Politics (IDDP) is excited to welcome its latest fellow, Sahar Massachi. Bringing a wealth of experience from his work on platform integrity, Massachi offers a unique perspective on the critical issues of online misinformation and its impacts on democracy. Massachi previously worked at Facebook on the civic integrity team and before that, he ran data for fundraising at Wikipedia, founded two prosocial startups, and served as the founding data scientist at Grovo Learning. These roles have given him first-hand exposure to the challenges and potential solutions in information technology, a viewpoint that he believes has been largely absent from broader public and policy discussions.

So in 2021, Massachi, together with Jeff Allen, founded the Integrity Institute, which serves as a professional community and think tank for people like him: those with current or previous experience working inside tech companies on integrity, trust, and safety issues. In joining IDDP as a fellow, Massachi aims to combine his industry expertise with IDDP's strong grounding in academic and policy realms to advance a shared goal of fostering healthier online spaces.

At IDDP, Massachi is eager to shift gears from being a leader to an individual contributor, allowing him to focus on his intellectual ideas and experiences. His enthusiasm for joining the "supergroup" of fellows at IDDP, a team that bridges academia and policy-making, is palpable. One of the projects he's excited about involves examining the transparency reports that platforms produce as part of the European Union's Code of Practice on Disinformation. His aim is to assess the reports' usefulness and discuss the kind of data that is most beneficial to society. As we continue our conversation with Massachi, he offers more details about his vision, his understanding of platform integrity, and how he plans to contribute to IDDP's mission.

Q: Could you describe the role and significance of IDDP in your work and in the broader digital landscape?

Sahar Massachi: IDDP is a bridge. It makes connections between academic spaces and policy-making environments. It has roots firmly planted in academia, yet it makes substantial strides in the regulatory and legislative domains. It amazes me how IDDP manages to extend its influence into so many areas. The organization brings together representatives from different places (different areas of academia and different types of jobs) to share knowledge and perspectives, effectively serving as a guide in navigating the complexities of internet-related issues.


Q: How does the team of fellows at IDDP help foster collaboration between different sectors?

Sahar Massachi: The team of fellows at IDDP is great, and we each represent a different perspective pretty well. It's like a meeting point where representatives from various sectors come together to share insights, experiences, and ideas. Collectively, we can serve as guides to many people covering these worlds. And also, guides to each other! I'm so excited to be learning from Brandon [Silverman] and Anna [Lenhart]. (And also Jeff [Allen], my cofounder at the Integrity Institute, but we're already in contact pretty often.)


Q: What changes would you suggest to better link research to policy in platforms like Facebook and YouTube?

Sahar Massachi: People who do research come from different academic disciplines, not to mention different perspectives in and outside of industry. It's hard to keep up with, much less engage with thoughtfully. So there's a pressing need for a guide: someone who understands the intricacies of these different worlds and can help distill what's most important. With such a guide, decision-makers at companies can understand the ideas coming from the research world and find the ones that are actually useful. That's a role that IDDP excels at, and we're working towards maximizing its potential. But the "policy" function at platforms isn't the only place worth linking to; I'd also recommend connecting researchers to builders, product people, threat intel, operations, and more.


Q: What does the role of AI look like in monitoring and moderating content on these platforms?

Sahar Massachi: AI certainly has a role in content moderation, and I imagine companies will rely on it more and more for scaled review of content. But it's not a standalone solution. As platforms "outsource" moderation work to AI, attackers can use AI to get around the rules; it's a kind of coevolutionary process where both the human and the machine learn from each other. So there'll be a cat-and-mouse game of rewriting content rules so they can't be gamed. At the same time, the solution can't be AI alone. We have to understand how platforms incentivize bad behavior, and how to remake them to incentivize good behavior instead.


Q: Do you believe that the public should have a say in how these algorithms are designed?

Sahar Massachi: The algorithms (and platforms) we design have far-reaching implications, impacting millions of users. Currently, the decision-making process is largely left to either a group of lawyers and PR people, or a small group of engineers, depending on the type of decision and platform. I strongly believe that a wider array of perspectives should be included in these discussions. At its core, power is being wielded by companies: power to affect people's lives and whole societies. That power needs legitimacy. Either it is anchored in a trusted and defensible open process run by the company, or it'll be wrested back by the state.


Q: What is your approach to balancing freedom of expression with ensuring integrity in online spaces?

Sahar Massachi: People talk about the tension between freedom of expression and content moderation. In the day-to-day of integrity work, the focus tends to be more about behavior than content. We debate how much weight to give retweeting, or scrutinize how to make it harder for users to spam. The questions we grapple with are often about design choices and systems. This is the way to safeguard freedom of expression: our work tries to increase quality while steering clear of individual whack-a-mole moderation or fraught conversations about the merits of any particular post.