Theme Panel: The Governance of Emerging Technologies: Platforms in Politics

Full Paper Panel

(Chair) Peter John Loewen, University of Toronto; (Discussant) Wendy H. Wong, University of Toronto; (Discussant) Simone Chambers, University of California, Irvine

Session Description:
The outbreak of a global pandemic has deepened reliance on digital platforms such as Zoom and Microsoft Teams. This panel engages with the need to theorize and measure the impacts of platforms on governance and governments, and explores existing strategies from across the political science literature to begin this work. Online services like WhatsApp, Twitter, and Google search have, for many, become integral to everyday life. Correspondingly, the companies that provide these services have become ubiquitous. These multinational platform companies are, among other things, essential communication infrastructures, arbiters of freedom of expression, and distributors of information. Platform companies are governors. At the same time, they are subjects of governance, targeted by states, NGOs, and intergovernmental organizations. Such governance efforts include restricting the collection of data about users (e.g., the EU’s GDPR, the Global Network Initiative), limiting applications of artificial intelligence (e.g., banning facial recognition in law enforcement), and challenging the concentration of market power in a small number of organizations (e.g., US antitrust suits). Perspectives in political science regarding functional and hybrid governance, algorithmic governance, transnational corporate governance, the role of policy networks, and democratic theory have much to contribute to this emerging area of research and policy.


Who Governs in the Digital State?
Amanda Clarke

Information technologies (IT) have always played a central role in government. They shape and reinforce the organization and management of the public sector. Information and data, and the technologies that allow for their collection, analysis, and application, are also key policy resources that determine how effectively a government can design policies and deliver services. Yet the centrality of IT to governance has long been neglected, both in the political science sub-field of public administration and by public administrators themselves. Private sector actors have been more conscious of the centrality of IT to the activities of governing and, since the 1980s, have benefited from widespread IT outsourcing and consulting contracts to become influential and well-remunerated private governors of public sector IT infrastructures. Repeated high-cost failures of government IT projects, and a growing recognition of the benefits of modern design practices, have inspired a counter-movement that, from 2010 onwards, has attempted to curb government reliance on private technology firms and management consultants and instead build in-house public sector digital capacity. Focusing on Canada, the United Kingdom, and the United States, this paper evaluates the extent to which this counter-movement is succeeding in limiting the private capture of public sector IT, especially in light of the urgent demands for digital capacity ushered in by the COVID-19 pandemic. The paper reports on public servants’ testimonies about the role, power, and influencing tactics of private technology and consulting firms, and analyzes government contracting patterns on pandemic-related initiatives (e.g., contact tracing applications, vaccine booking systems, and benefits delivery).
Beginning from the premise that those who govern public sector IT become de facto governors of government processes writ large, the paper’s analysis appraises how accountability and power within the digital-era state are distributed across public and private actors, and contributes new insight to the literatures on public sector accountability and corporate capture, digital government reform, and the role of private technology firms in contemporary democracy.

Platform Sovereigns: Contact Tracing Applications and the Power of Big Tech
Jamie Duncan, University of Toronto; Alexandra Martin, University of Toronto

Can a smartphone app cure a pandemic? During the COVID-19 pandemic, we have seen attempts to turn our smartphones into public health surveillance tools in our pockets through the use of exposure notification (EN) applications. This paper examines the introduction of the Google-Apple Privacy-Preserving Contact Tracing (GA-PPCT) protocol as a case of coercive policy diffusion. We ask: how did we go from learning of the novel coronavirus to the implementation of a far-reaching network of interoperable EN apps within mere months? We demonstrate how the infusion of technological solutionism into emergency public health responses contributed to the ongoing erosion of state authority to act upon problems defined as technical by bureaucrats and the private “expert class” of big tech. Drawing on nascent literature in platform governance as well as established research on policy transfer and New Public Management, we contend that understanding technology policy diffusion requires recognizing how such platforms govern through the provision of global standards. We investigate the confluence of demands for rapid public health measures and the control of smartphone platforms by two companies. We show how the private development of the publicly deployed GA-PPCT protocol enabled its rapid global spread. This case study also shows the dangers of overreliance on technical solutions to problems that are not only medical, but social and political in nature. Narrowly framing exposure notification apps using discourses of privacy allowed Apple and Google to form temporary strategic alliances with privacy advocates in academia and civil society. Such alliances heightened the perceived legitimacy of their coercive approach to engaging with state authorities, some of whom wished to pursue alternative technical tools. Conversely, the strategic redefinition of political problems as primarily technical issues can also serve the priorities of states.
By accepting the expert authority of the GA-PPCT in setting technical parameters for EN applications, states benefit from a narrowing of policy alternatives and the expedition of desirable policy outcomes, in this case by enabling EN apps to function more efficiently on consumers’ devices and ensuring the interoperability of EN applications across devices and national boundaries. Approaching privacy as a technical good encourages solutions oriented toward product design and customer experiences, allowing states to outsource not only functions but also accountability for their efficacy. To understand the role of private influence and expertise in the development and rollout of EN apps in Canada, the United States, France, and the United Kingdom, we analyze technical and policy documentation as well as executive communications related to these apps from March 2020 through November 2020 to observe how the GA-PPCT framework was adopted, challenged, and implemented. Our contribution is twofold. First, we document how the self-constraining impulse toward smaller government has allowed for the emergence of new forms of private sovereignty and technocratic governance, exemplified by Google and Apple’s capacity to impose global standards such as the GA-PPCT. Second, as a consequence of this novel form of authority, we propose that research on technology policy diffusion requires a theoretical lens informed by platform governance, which can account for how states and platforms co-exist as governors-that-are-governed on a global scale.

Platform Governance: The Domestic Politics of Online Content Regulation
Robert Gorwa, University of Oxford

Billions of people around the world use services like Facebook, Twitter, Instagram, and YouTube every day to access information, engage in conversation, and stay in touch with friends and family. These hugely profitable and popular platforms for user-generated content, operated by large multinational technology companies, have in the past decade created complex systems of private regulatory standards that govern online behavior and have a significant impact on the social, cultural, and political lives of their customers around the world. Where these systems were once tacitly accepted or ignored by state actors, governments have in recent years increasingly sought to shape the rules and practices deployed by platform companies through various strategies. In some cases, governments have sought to ‘take back control’ and re-assert state authority over this privately managed domain, while in others they have opted to work directly with companies in a more collaborative fashion. What explains the variation in how governments intervene in platform governance? The paper argues that how governments seek to shape, challenge, or contest private platform rulemaking can be understood as fitting within either a collaborative or a contested strategy. Building upon literatures in global regulatory politics, especially Farrell and Newman’s work (2018, 2019) on international/domestic issue linkages in transnational technology policy negotiations, the paper explains variation between these two strategies as the result of an interplay among three factors: domestic demand for change, the ability to supply that change (regulatory capacity and transnational or domestic institutional constraints on that capacity), and normative understandings of an actor’s appropriate degree of policy intervention.
The conceptual argument, which is being developed as part of a larger ongoing book project, is presented here through one chapter: an in-depth case study of developments in intermediary content regulation in the United States (2006 to the present). Drawing upon qualitative interviews conducted with governance stakeholders (firms, policymakers, civil society) and deliberative policy documents obtained via FOIA requests, the paper shows that the non-emergence of successful content-moderation-focused legislation in the US is best understood through a combination of unique normative constraints (especially those posed by the First Amendment and American free speech traditions) and generally low levels of domestic demand for new platform-focused rules due to the lobbying and public-facing strategies of industry. Beyond the general theoretical framework presented, this paper seeks to make two specific contributions. Empirically, the paper highlights the domestic factors that help explain why, despite the massive increase in the public salience of ‘big tech’-related issues in the past five years both in the US and beyond, and the growing regulatory burden that American multinational technology companies face in dozens of jurisdictions, there has been no corresponding regulation in the United States. Conceptually, the paper advances current discussions on the importance of domestic politics for international regulatory issues, and the specific circumstances under which we can expect domestic politics to matter in key regulatory episodes (Bradford 2020; Farrell and Newman 2019).

Assessing Global Regulatory Responses to Facebook’s Political Harms
Swati Srivastava, Purdue University

As governments around the world mobilize to rein in Big Tech, researchers lack high-quality systematic data for comparative assessment of state responses. This paper analyzes the robustness of over 900 global regulatory responses in effectively targeting Facebook’s political harms of mass surveillance, speech management, information pollution, and behavioral conditioning. The research draws on an original database of 4,315 major Facebook incidents from the company’s founding in 2004 through February 2021. Instead of running keyword searches through automated text analysis, student coders carefully read over 50,000 news reports to extract government responses to Facebook’s incidents, ranging from inquiries, hearings, and proposals to legislation, lawsuits, and administrative rulings. The paper groups the regulatory responses under three themes (privacy, anti-monopoly, and content moderation) and analyzes their scope and strength using an inductively derived coding scheme modeled after the OECD’s Indicators of Regulatory Policy and Governance. The research makes two contributions. First, it maps a fuller spectrum of Big Tech regulatory scrutiny beyond high-profile legislation such as the General Data Protection Regulation and jurisprudence such as the Right to be Forgotten. Second, it identifies major regulatory gaps for developing and enforcing solutions to counter Big Tech’s power in a transnational context.
