Commentary

Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.

High-Tech Geopolitics | Prateek Waghre

(Re)Defining Social Media as Digital Communication Networks

This article originally appeared in TheQuint with the headline 'We Need a Better Definition for Social Media To Solve Its Problems.' An excerpt is reproduced here.

The Need For a New Term

Conversations around ‘social media platforms’ also tend to fixate on specific companies, the prevalence of certain types of information on their platforms (misleading information, hate speech, etc.) and their actions in response (enforcement of community standards, application of labels, compliance with government orders, etc.). While this is certainly relevant, it is out of step with the nascent yet growing understanding that most users, and especially motivated actors (whether good or bad), operate across a range of social media platforms. In the current information ecosystem, any effects — adverse or positive — are rarely limited to one particular network but ripple outwards across different networks, as well as off them.

There’s nothing wrong with an evolving term, but it must be consistent and account for future use cases. Does ‘social media platforms’ translate well to the currently buzz-wordy ‘metaverse’ use case, which, with communication at its core, shares some of the fundamental characteristics identified earlier? Paradoxically, the term ‘social media platform’ is simultaneously evolving and stagnant, expansive yet limiting. This is one of the reasons my colleagues at The Takshashila Institution and I proposed the frame of 'Digital Communication Networks' (DCNs), which has three components: capability, operators and networks.

Read More
High-Tech Geopolitics | Prateek Waghre

Tackling Information Disorder, the malaise of our times

This article was originally published in Deccan Herald.

The term ‘fake news’ – popularised by a certain world leader – is today used as a catch-all for any situation in which there is a perceived or genuine falsification of facts, irrespective of intent. But the term itself lacks the nuance to differentiate between the many kinds of information operations that are common, especially on the internet.

Broadly, these can be categorized as disinformation (false content propagated with the intent to cause harm), misinformation (false content propagated without the knowledge that it is false/misleading or the intention to cause harm), and malinformation (genuine content shared with a false context and an intention to harm). Collectively, this trinity is referred to as ‘information disorder’.

Over the last four weeks, Facebook and Twitter have made some important announcements regarding their content moderation strategies. In January, Facebook said it was banning ‘deepfakes’ (videos in which a person is artificially inserted by an algorithm based on photos) on its platform. It also released additional plans for its proposed ‘Oversight Board’, which it sees as a ‘Supreme Court’ for content moderation disputes. Meanwhile, in early February, Twitter announced its new policy for dealing with manipulated media. But the real question is whether these solutions can address the problem.

Custodians of the internet

Before dissecting the finer aspects of these policies to see if they could work, it is important to unequivocally state that content moderation is hard. The conversation typically veers towards extremes: Platforms are seen to be either too lenient with harmful content or too eager when it comes to censoring ‘free expression’. The job at hand involves striking a difficult balance and it’s important to acknowledge there will always be tradeoffs.

Yet, as Tarleton Gillespie says in Custodians of the Internet, moderation is the very essence of what platforms offer. This is based on the twin pillars of personalisation and the ‘safe harbour’ that they enjoy. The former implies that they will always tailor content for an individual user, and the latter essentially grants them the discretion to choose whether a piece of content can stay up on the platform or not, without legal ramifications (except in a narrow set of special circumstances like child sex abuse material, court orders, etc.). This, of course, reveals the concept of a ‘neutral’ platform for what it is: a myth. That is why it is important to look at these policies with as critical an eye as possible.

Deepfakes and Synthetic/Manipulated Media

Let’s look at Facebook’s decision to ban ‘deepfakes’ using algorithmic detection. The move is laudable; however, it will not address the lightly edited videos that also plague the platform. Additionally, disinformation agents have modified their modus operandi to use malinformation, since it is much harder for algorithms to detect. This form of information disorder is also very common in India.

Twitter’s policy goes further and aims to label/obfuscate not only deepfakes but any synthetic/manipulated media after March 5. It will also highlight and notify users that they are sharing information that has been debunked by fact-checkers. In theory, this sounds promising but determining context across geographies with varying norms will be challenging. Twitter should consider opening up flagged tweets to researchers.

The ‘Supreme Court’ of content moderation

The genesis of Facebook’s Oversight Board was a November 2018 Facebook post by Mark Zuckerberg ostensibly in response to the growing pressure on the company in the aftermath of Cambridge Analytica, the 2016 election interference revelations, and the social network’s role in aiding the spread of disinformation in Myanmar in the run-up to the Rohingya genocide. The Board will be operated by a Trust to which the company has made an irrevocable pledge of $130 million.

For now, cases will be limited to individual pieces of content that have already been taken down and can be referred in one of two ways: by Facebook itself, or by individuals who have exhausted all appeals within its ecosystem (including Instagram). And while geographical balance has been considered, for a platform that has approximately 2.5 billion monthly active users and removes nearly 12 billion pieces of content a quarter, it is hard to imagine the group being able to keep up with the barrage of cases it is likely to face.

There is also no guarantee that geographical diversity will translate to the genuine diversity required to deal with the kind of nuanced cases that may come up. There is no commitment as to when the Board will also be able to look into instances where controversial content has been left online. Combined with the potential failure of the deepfakes policy to address malinformation, this will result in a tradeoff where harmful, misleading content will likely stay online.

Another area of concern is the requirement to have an account in the Facebook ecosystem to be able to refer a case. Whenever the Board’s ambit expands beyond content takedown cases, this requirement will exclude individuals and groups not on Facebook or Instagram from seeking recourse, even if they are impacted.

The elephant in the room is, of course, WhatsApp. With over 400 million users in India and support for end-to-end encryption, it is the main vehicle for information disorder operations in the country. The oft-repeated demands for weakening encryption and providing backdoors are not the solution either.

Information disorder itself is not new. Rumours, propaganda, and lies are as old as humanity, and surveillance will not stop them. Social media platforms, however, dramatically increase the velocity at which information flows, thereby amplifying the impact of information disorder. Treating this solely as a problem for platforms to solve is equivalent to addressing a demand-side problem through exclusively supply-side measures. Until individuals start viewing new information with a healthy dose of skepticism, and media organisations stop being incentivised to amplify information disorder, there is little hope of addressing this issue in the short to medium term.

(Prateek Waghre is a research analyst at The Takshashila Institution)

Read More
High-Tech Geopolitics | Anupam Manur

The folly of breaking up Big Tech

Further, breaking up these companies would significantly reduce the value consumers get due to the high interconnectedness of the products. A lot of the value that Google has seen in the Maps platform, for instance, comes from all the data that it has from Search. Customers also receive a lot of value from Google products that are cross-subsidised by revenue earned elsewhere. YouTube, for instance, is widely believed to be unprofitable but is supported by revenues from other products.

We would also have to stop and wonder how one of the most integral parts of our lives, Google Search, is provided free of cost. Google can offer the service for free because it can monetise it with advertising. If Google is broken up, this would no longer be possible. Breaking up any one of these services would leave us with substantially less valuable services.

Breaking up these technology companies would also have a severe impact on innovation in the sector. As an article in Politico points out, “The top five spenders in research and development in 2017 were all tech companies. Amazon alone spent more than $22 billion. The development of autonomous vehicles, artificial intelligence and voice recognition wouldn’t be nearly as advanced as they are now if it weren’t for the work of Google and Amazon”. Investing in R&D and finally introducing products into the market is an expensive ordeal. However, big tech companies can afford to do so because of the interconnectedness that exists within their products... Read the entire article

Read More