Commentary
Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies & Economic Policy.
- Aarthi Ratnam
- Aditya Pareek
- Aditya Ramanathan
- Anand Arni
- Anupam Manur
- Arjun Gargeyas
- Ganesh Chakravarthi
- Harshit Kukreja
- Kajari Kamal
- Mahek Nankani
- Manoj Kewalramani
- Megha Pardhi
- Mihir Mahajan
- Nitin Pai
- Prakash Menon
- Pranav RS
- Pranay Kotasthane
- Prateek Waghre
- Priyal Lyncia D'Almeida
- Rohan Seth
- Ruturaj Gowaikar
- Sapni GK
- Sarthak Pradhan
- Shambhavi Naik
- Shrey Khanna
- Sridhar Krishna
- Yazad Jal
(Re)Defining Social Media as Digital Communication Networks
This article originally appeared in TheQuint with the headline 'We Need a Better Definition for Social Media To Solve Its Problems.' An excerpt is reproduced here.
The Need For a New Term
There’s nothing wrong with an evolving term, but it must be consistent and account for future use cases. Does ‘social media platforms’ translate well to the currently buzz-wordy ‘metaverse’ use case, which, with communication at its core, shares some of the fundamental characteristics identified earlier? Paradoxically, the term ‘social media platform’ is simultaneously evolving and stagnant, expansive yet limiting.

This is one of the reasons my colleagues at The Takshashila Institution and I proposed the frame of ‘Digital Communication Networks’ (DCNs), which have three components — capability, operators and networks.

Read more
Facebook Says It Inadvertently Restricted A Hashtag. Now It Needs To Tell Us Exactly How And Why
This article originally appeared on Medianama. An excerpt is reproduced here:
An explanation
The presence of a political context surrounding these cases also raises the question of how Facebook is responding to the possible weaponisation of its community reporting. We know from Facebook’s August 2020 CIB report that it took action against a network engaged in mass reporting. What principles does it use to define thresholds for action? How is coordinated activity that falls below its self-defined threshold of Coordinated Inauthentic Behaviour handled? Knowledge about the specifics of thresholds becomes essential when they make the difference between publicly disclosed and internal actions, as the Sophie Zhang–Guardian series demonstrated in the Indian context.

Facebook — this applies to other networks too, but Facebook is by far the largest in India — needs to put forward more meaningful explanations in such cases. Ones that amount to more than ‘Oops!’ or ‘Look! We fixed it!’. There are, after all, no secret blocking rules stopping it from explaining its own mistakes. These explanations don’t have to be immediate. Issues can be complex, requiring detailed analysis. Set a definite timeline, and deliver. No doubt, this already happens for internal purposes. And then, actually show progress. Reduce the trust deficit, don’t feed it.

This does raise the concern of being drawn into narrow content-specific conversations or being distracted by ‘transparency theatre’, thereby missing the forest for the trees. These are legitimate risks and need to be navigated carefully. The micro-level focus can be about specific types of content or actions on a particular platform. At the macro level, it is about impact on public discourse and society. They don’t have to be mutually exclusive, and what we learn from one level should inform the others, in pursuit of greater accountability.

To read more, visit: Facebook says it inadvertently restricted a hashtag. Now it needs to tell us exactly how and why | MediaNama
It’s Not Just About 50 Tweets and One Platform
This article originally appeared in TheWire. An excerpt is reproduced here.

Transparency and a voluntary act

This latest attempt came to light because Twitter disclosed the action in the Lumen Database, a project that houses voluntary submissions. And while Twitter is being criticised for complying, reports suggest that the company wasn’t the only one that received such a request. It just happened to be the only one that chose to disclose it proactively.

Expanding on legal scholar Jack Balkin’s model for speech regulation, there are ‘3Cs’ (cooperation, cooption and confrontation) available to companies in their interaction with state power. Apart from Twitter’s seemingly short-lived dalliance with confrontation in February 2021, technology platforms have mostly chosen the cooperation and cooption options in India (in contrast to their posturing in the West). This is particularly evident in their reaction to the recent Intermediary Guidelines and Digital Media Ethics Code. We’ll ask for transparency, but what we’re likely to get is ‘transparency theatre’ – ranging from inscrutable reports to a deluge of information which, as communications scholar Sun-ha Hong argues, ‘won’t save us’.

Reports allege that the most recent Twitter posts were flagged because they were misleading. But, at the time of writing, it isn’t clear exactly which law(s) were allegedly violated. We can demand that social media platforms be more transparent, but the current legal regime dealing with ‘blocking’ (Section 69A of the IT Act) places no such obligations on the government. On the contrary, as lawyers Gurshabad Grover and Torsha Sorkar point out, it enables the government to issue ‘secret blocking’ orders. Civil society groups have advocated against these provisions, but the political class (whether in government or opposition) is yet to make any serious attempt to change the status quo.
Are Tech Platforms Doing Enough to Combat ‘Super Disinformers’?
This is an excerpt from an op-ed published on TheQuint.
The Repeat Super-Disinformer
The wrong way to regulate disinformation
This article originally appeared in Deccan Herald.
When the Kerala Governor signed a controversial Ordinance, now withdrawn, proposing amendments to the Kerala Police Act, there was understandably a significant amount of criticism and ire directed at the state government for a provision that warranted a three-year jail term for intentionally and falsely defaming a person or a group of people. After the backlash, the state’s Chief Minister announced his intention not to implement the fresh amendment.
How not to regulate information disorder

For anyone tracking the information ecosystem and how different levels of state administration are responding to information disorder (misinformation, disinformation and malinformation), this attempted overreach is not surprising. In Kerala alone, over the last few months, we have witnessed accusations from the opposition that the state administration has engaged in ‘Trump-ian’ behaviour, decrying any unflattering information as ‘fake news’. Even in September, the Chief Minister had to assure people that measures to curb information disorder would not affect media freedom, after pushback against decisions to expand fact-checking initiatives beyond Covid-19-related news. In October, it was reported that over 200 cases had been filed for ‘fake news’ in the preceding five months.

Of course, this is by no means limited to one state, or to a particular part of the political spectrum. Across the country, there have been measures such as banning social media news platforms, notifications/warnings to WhatsApp admins, a PIL seeking Aadhaar linking to social media accounts, as well as recommendations to the Union Home Minister for ‘real-time social media monitoring’. Arrests/FIRs against journalists and private citizens for ‘fake news’ and ‘rumour-mongering’ have taken place in several states.

How to regulate information disorder?

Before proceeding to ‘the how’, it is important to consider two fundamental questions when it comes to regulating disinformation. First, should we? Four or five years ago, many people would have said no. Yet, today, many people will probably say yes. What will we say four or five years from now? We don’t know. ...

For the complete article, go here.
Why we need to rethink how we disagree online
This article originally appeared in the Deccan Herald. An excerpt is reproduced here.

Intentions as well as consequences are important in the information ecosystem. In July, an anonymous Twitter handle that purportedly offers ‘unpopular unapologetic truths’ distastefully advised its male followers to "only marry virgins". A quick Twitter search suggests that this wasn't the first time this account had engaged in such rhetoric, nor was it the last – but on this particular occasion it broke out from its regular set of followers to garner wider attention.

Understandably, there was outrage. Some of the account's past content was called out, regular followers of the account were called out, both the tweets in question and the account were reported in unison by multiple users, and more. However, two days later the account itself declared victory, stating that interest in its content had increased and 'weak' followers had been cleared out.

Earlier in the year, efforts by the campaign 'Stop Funding Hate' led to a movie streaming service, a business school and an ad network excluding a far-right Indian website from their ad programs. However, the website itself claimed an increase in voluntary contributions of 'upto 700 per cent' and also stated that there was no drop in advertising revenues.

And in an ongoing instance, in late August, a news anchor tweeted out a ‘teaser’ video of an upcoming series that claimed it would unearth a conspiracy enabling minorities to occupy a disproportionate number of civil services posts in the country. An indicative analysis, using the tool Hoaxy, seemed to show that a lot of the initial engagement came from tweets that were meant to call out the nature of the content via quote tweets. Often, many of these accounts had a large number of followers themselves.

Around the same time, an analysis by Kate Starbird, an eminent crisis informatics researcher, showed a misleading tweet by Donald Trump spreading “much farther” through quote tweets than through retweets. She also pointed out that a lot of the early quote tweets were critical in nature and called on the platform to take action.

While the matter of this particular series itself is sub judice, let’s focus on the days just after the tweet in question. In four days, the anchor’s follower count had grown nearly five per cent. In the ensuing period there have also been multiple hashtag campaigns professing support for both the anchor and the channel.

What is common to each of these situations is that efforts to call out problematic content may have inadvertently benefitted the content creators by galvanising their supporters (in-group), propagating the content on digital platforms (algorithmic reward) and perhaps even recruiting new supporters who were inclined to agree with the content but only chose to participate as a result of the amplification and/or perceived attacks against their points of view or beliefs (disagreement with the out-group).
Read more.
Drilling Into Indian Twitter's Interest in Sweden's Violent Riots
This article appeared in TheWire.

Over last weekend, hashtags such as SwedenRiots, NorwayRiots and WeAreWithSweden were trending on Twitter in India. This may have seemed surprising, but if anyone spent a few minutes looking at the content that was being shared, it became painfully obvious why this was happening.

A cursory Twitter search showed that many accounts which typically share India-centric majoritarian content were actively participating in the conversation, with plenty of local references to recent violence in Delhi and Bengaluru. A more extensive examination using a combination of tools like Hoaxy, Twitter’s Advanced Search feature and APIs allowed us to dig in a little bit further.
In parallel, let’s also keep in mind the framework of Dangerous Speech, which can be defined as ‘any form of expression (e.g. speech, text, or images) that can increase the risk that its audience will condone or commit violence against members of another group.’
First, using Hoaxy, I attempted to get a sense of network clusters for these hashtags. Hoaxy samples tweets, so its results should be considered indicative. The size of the circles and the density of clusters around them represent the amount of engagement (retweets, mentions, quotes) an account receives. The names that the visualisations throw up help corroborate, to an extent, what the Twitter search suggested – that a lot of the activity on these hashtags was being driven by accounts that associate themselves with pro-Hindutva narratives.

Where did they come from?

Next, I used Twitter’s API to capture around 60,000 tweets across these three hashtags on August 29 and 30. Approximately 30,000 unique accounts shared content using at least one of these hashtags. It is not uncommon for accounts not to include location data; for these hashtags, roughly 40% did not (for context: I have typically seen this number vary between 45% and 60%). In the doughnut charts below – which plot the number of tweets by location for the remaining accounts – the predominance of Indian locations is visible.

Location of accounts for SwedenRiots

Location of accounts for WeAreWithSweden
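For readers who want to attempt a similar collection, the sketch below shows one way to pull recent tweets for a hashtag and tally the self-reported account locations. It assumes Twitter API v2 access via the tweepy library; the bearer token, hashtag and page limits are placeholders, and the recent-search endpoint only covers the previous seven days, so this is an illustration of the method rather than a reproduction of the original dataset.

```python
import collections
import tweepy

# Assumption: a valid API v2 bearer token; the hashtag and limits are illustrative.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"
client = tweepy.Client(bearer_token=BEARER_TOKEN, wait_on_rate_limit=True)

locations = collections.Counter()
seen_users = set()

pages = tweepy.Paginator(
    client.search_recent_tweets,
    query="#SwedenRiots",        # one of the three hashtags discussed above
    expansions="author_id",
    user_fields=["location"],
    max_results=100,
    limit=50,                    # up to ~5,000 tweets in this sketch
)

for page in pages:
    # Expanded user objects arrive in the 'includes' payload; tally the free-text
    # 'location' field once per unique account.
    for user in (page.includes or {}).get("users", []):
        if user.id in seen_users:
            continue
        seen_users.add(user.id)
        locations[(user.location or "").strip() or "not specified"] += 1

for place, count in locations.most_common(20):
    print(f"{place}: {count}")
```

The location here is simply whatever the account holder typed into their profile, which is why a sizeable share ends up as "not specified", consistent with the missing-location figure mentioned above.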
Are we seeing the beginnings of an ‘Indian internet’?
Almost a month ago, the Ministry of Electronics and Information Technology took the unprecedented step of banning 59 apps/services on the purported grounds that these services were prejudicial to the sovereignty and integrity of India. At the time, it was unclear what a ban entailed and how it would be implemented and/or enforced.
However, in the subsequent weeks, companies voluntarily suspended their services, Apple and Google de-listed them from their respective app stores, and telecom service providers were ordered to block these apps. As a result, the ban has been 'technically' enforced from the perspective of an average user who may not want to navigate the world of Virtual Private Networks (VPNs) and Tor. So, for now, it appears that we have the answer to the second question.
Reports now suggest that 47 more apps could be facing a ban with another 275 being monitored closely.
Forests, trees and branches of the internet
In the context of the stand-off between India and China, these moves have been, and will be, portrayed as a strong response to China. As Alex Stamos (former CSO at Facebook) of Stanford's Internet Observatory illustrates, there are several overlapping considerations, and many of these are applicable to India too.
Thus, as far as the future of the internet in India (and even the world) goes, these developments cannot be viewed in isolation. They must be looked at in combination with recent events in India, its stated position on cyber sovereignty, as well as global trends.
After the Indian Ban on 59 Chinese Apps, What Comes Next?
As the clock ticked towards 9 PM on the night of June 29, the talk of the Internet in India was the Ministry of Electronics and IT’s press release indicating that 59 apps would be banned. The stated reason for this ban was that they were engaged in activities prejudicial to the sovereignty and integrity of India. The common thread among these apps is that they are of ‘Chinese origin’, even though that isn’t explicitly mentioned in the government’s banning order.

How it could be implemented

For now, let’s set aside the question of whether the ban is a fitting response to the killing of 20 Indian soldiers in Galwan, Ladakh on June 15, and whether it is justified or not. What is likely to happen from here is that the Google Play Store and iOS App Store will be asked to de-list the apps from their Indian storefronts. There is precedent for this: a ban on TikTok was previously ordered by the Madras High Court.

Read more
Will India experience the fallout of Trump vs Twitter?
This is an extract from the full article, which appeared in Deccan Herald.

...But before resorting to isomorphic mimicry, it is important to understand what the executive order proposes. A reading suggests that it seeks to narrow the definition of 'good faith' under which a platform can carry out 'Good Samaritan' blocking. Kate Klonick was quoted in Recode as saying that the order was not enforceable and even referred to it as 'political theatre'. And Daphne Keller published an annotated version of the order in which she classified various sections as 'atmospherics', 'legally dubious', and points on which 'reasonable minds can differ'.

The current trajectory in India appears to be headed in the opposite direction. A recent PIL in the Supreme Court, filed by a BJP member, sought to make it mandatory to link social media accounts with identification. While the petition itself was disposed of, the petitioner was directed to be impleaded in the ongoing WhatsApp traceability case. The draft Personal Data Protection Bill proposes 'voluntary' verification for social media intermediaries.
What 300 Days of Internet Winter in Kashmir Tell Us About Erecting a Digital Wall
This is an extract from the full article published on The Wire.

...What is the cost of this protracted disruption? There is no shortage of real-life stories about the economic impact this prolonged Internet disruption has had in the union territory; media reports are replete with such examples. Given that we are still in the midst of these events, an academic exercise to estimate the economic costs has not been published.

Still, it is possible to arrive at a back-of-the-envelope ‘estimate’ of how many hours of Internet access have potentially been disrupted since August 4, 2019, using the available figure for internet subscribers (38%, from TRAI, for the Jammu and Kashmir service area) and a rough estimate of time connected drawn from reports on patterns of internet usage in India (different sources peg ‘active consumption’ at between 90 and 150 minutes a day; let’s use the higher end of that range, noting that passive consumption, which was also affected, is not measured).
Between August 4 and January 14, when there was a complete shutdown, this number amounted to ~1.9 billion hours. In the period from January 14 to March 4, when there was whitelisted access another ~600 million hours were added. And the 87 days between then and May 30, will have accounted for another ~1 billion hours. That adds up to around 3.5 billion hours of disrupted internet access for approximately 12.25 million people. Let that sink in.
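The arithmetic behind that estimate can be reproduced in a few lines. The inputs below (a population of roughly 12.25 million, TRAI's 38% subscriber figure and 2.5 hours of active use per day) come from the article; the period lengths are approximate day counts for the dates mentioned.

```python
# Back-of-the-envelope reproduction of the disrupted-hours estimate.
population = 12.25e6          # approximate population affected (from the article)
subscriber_share = 0.38       # TRAI figure for the J&K service area
active_hours_per_day = 2.5    # upper end of reported daily 'active consumption'

subscribers = population * subscriber_share
periods_in_days = {
    "complete shutdown (Aug 4 - Jan 14)": 163,
    "whitelisted access (Jan 14 - Mar 4)": 50,
    "restricted 2G access (Mar 4 - May 30)": 87,
}

total = 0.0
for label, days in periods_in_days.items():
    hours = subscribers * active_hours_per_day * days
    total += hours
    print(f"{label}: ~{hours / 1e9:.1f} billion hours")
print(f"Total: ~{total / 1e9:.1f} billion hours of disrupted access")
```

Running this gives roughly 1.9, 0.6 and 1.0 billion hours for the three periods, adding up to about 3.5 billion hours, matching the figures above.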
Why govt must address the question of access inequity before making mobile apps mandatory during COVID-19
....
The Issues
Several concerns have been raised about the implications on multiple fronts. Privacy: the risk of the app's evolution into a vehicle for mass surveillance. Security: the potential information security risks to individual users, as well as the risks of a large centralised database of citizen data. Legal: whether the National Disaster Management Act confers the necessary powers to mandate such an app, and the absence of data protection legislation. Technological: the efficacy of contact tracing apps and algorithmic risk detection, and the associated issues with false positives and false negatives. Transparency: opaque processes and the fact that the code has not been open-sourced yet. Some reports suggest that this may happen when the app is considered 'stable'; it is unclear, though, how stability is defined.

The Ada Lovelace Institute has published a rapid review titled "Exit through the App Store", which warns of the risks of 'rushed deployment of technological solutions without credible supporting evidence and independent oversight'.
On Equity
An aspect which is under-explored is equity, or the lack of it. Equity is a crucial part of policy design: it deals with the social allocation of benefits, and with the questions of 'who pays' and 'who benefits'. In the book 'Policy Paradox', Deborah Stone lists three dimensions and eight issues, along with the dilemmas associated with each distribution method. Ultimately, this is a complex undertaking, and no matter what the criteria for distribution are, some group or the other will feel that it has been left behind by the policy.

Read more
As Chorus of 'Chinese Virus' Rings Loudly in India, Is the Stage Set For an Info-Ops Tussle?
This article was originally published on The Wire.

Users of Indian Twitter, for want of a better term, will not have been able to escape the term ‘Chinese virus’ trending on the platform in the form of different hashtags over the last 10 days. What seemingly started off as agitprop by the American right has transcended boundaries and resonated in India as well, echoing the sentiment that Beijing and the Chinese should be severely penalised for the COVID-19 pandemic.

This sentiment was backed by what appeared to be some coordinated activity on Twitter from March 24 onward, around the time of India’s lockdown, all with the purpose of taking aim at China. #ChineseVirus19, #ChineseBioterrorisn, #Chinaliedpeopledied and #ChineseVirusCorona were some of the hashtags being used in favour of this narrative around March 24 and March 25.

Read more
Public sharing of home quarantine addresses a bad idea
This article originally appeared in Deccan Herald.
On March 24, several WhatsApp groups catering to apartment associations started buzzing with Excel files containing addresses of those who had been placed under home quarantine. The source was a website run by the Government of Karnataka which contained details for all districts in Karnataka. This author was eventually able to access the website, which contained approximately 30 files. It is unclear whose decision it was to make these details public. Statements from government officials indicate this was a deliberate step. However, it seems to be at odds with how sensitively matters are reportedly being handled by teams on the ground, who were informing nearby residents as needed.
Why is this a bad idea?
On March 14, a leading English daily misreported a story that the spouse of a patient who had tested positive for COVID-19 had skipped quarantine and travelled to another Indian city. There were several calls for exemplary punishment, but it later turned out that the person in question had not violated quarantine instructions at the time of travelling. Sure, certain questionable decisions were made subsequently. But we need to be aware that these are unprecedented times and no one is really prepared to deal with the situation. The fear and the instinct for self-preservation are apparent. But there is also a danger of uncontrolled reactions by the general public in such a scenario.
Over the last few days, we have also seen disturbing reports of airline crew and healthcare professionals facing a backlash at their respective places of residence. Videos have also emerged showing people physically abusing fellow citizens for coughing in public and not wearing masks. Regrettably, citizens from the North-East have been subjected to racial abuse.
This is why it is ill-advised to publicly share this kind of information. While individual names and phone numbers have not been shared, an address is enough to enable targeting. In information security terms, it can be considered a form of doxing (publicly posting personal information). The individuals living at those addresses have been put at risk of being on the receiving end of discriminatory and abusive behaviour. While some of them may have violated their quarantine instructions, treating all of them as potential criminals is not an acceptable response.
Unwittingly aiding the flow of information
Another important aspect to consider is the role of unaffected individuals in circulating this information. The Bengaluru version of the list had been doing the rounds on WhatsApp since the evening of March 24, and it continued to be circulated by people even if they disagreed with the practice or could not vouch for its authenticity. As expected, the link to the website eventually made its way onto Twitter and was shared by users with a large number of followers. Others shared it with the intention of being helpful. Unfortunately, in such a situation, these actions only aided the virality of the information.
There is also a tendency to believe that since the information is already out there, individual sharing actions do not matter. However, when the information in question can put someone else at risk, we must consider the downstream implications of that individual action too.
What is the right way to react?
Understanding how to react in such situations so as to minimise the risk to others is important. Although it is tempting to share such information with acquaintances, or to tweet about specifics while disagreeing with the action, it is necessary to consider the unintended consequences of that action.
If the intention is to raise awareness about the lack of sensitivity, then the act itself can be highlighted without sharing the location/source of such information. It must be remembered that this action can have the second order effect of nudging others to look for it.
Another possible course of action is to reach out directly to the authorities who have made this information public. This may not be possible in all situations but can be an effective strategy. It should be noted that their actions or decisions are not always taken with bad intentions. Those responsible may react positively to such interventions if the risks are clearly highlighted to them.
Why is sharing-hygiene important?
This sharing hygiene is especially important as we see more information disorder flooding our lives. The large platforms where this information proliferates are attempting to take measures to tackle it, but such content moderation at scale is impossible to do well. It is as much a demand-side problem as it is a supply-side problem. Passively sharing information may have more consequences than we realise. We have a collective role to play in curbing information disorder.
Hotstar blocked John Oliver show even before Modi govt could ask. It’s a dangerous new trend
This article was originally published in ThePrint. Censorship in response to moral panic and outrage was the norm, but now in India, we’re even cutting out the middlemen.
When riots were taking place in northeast Delhi and US President Donald Trump was set to land in India, HBO’s popular Sunday night show Last Week Tonight, hosted by John Oliver, aired an episode on Prime Minister Narendra Modi. This episode, however, did not appear on Hotstar’s listings for the show, which are normally updated on Tuesday mornings in India (it had still not been added at the time of writing). International publications like Time magazine and The Economist have been the subject of outrage for carrying stories critical of PM Modi in the past. Netflix, too, has faced criticism for producing and housing shows like Leila and Sacred Games. Perhaps the desire to avoid facing similar public anger prompted Disney-owned Star India to take this step.
It is important to look at the implications of this intervention.

All the world’s an outrage

A moral panic is a situation where public fear and the level of intervention (typically by the state) are disproportionate to the objective threat posed by a person/group/event. In India, one of the most infamous cases of a technology company bowing to moral panic occurred in January 2017. The Narendra Modi government threatened Amazon with the revocation of visas when it became aware that the online retailer’s Canadian website listed doormats that bore the likeness of the Indian flag. It was fitting that this threat was issued on Twitter by then External Affairs Minister Sushma Swaraj, since the social networking platform was also the place where the anger built up. It should come as no surprise that Amazon acquiesced, even though it was bound by no law to do so. While such depictions of national symbols are punishable under Indian law, it is debatable whether that should apply to the Canadian website of an American company, not intended for India-based users.
This wasn’t the first instance of sensitivities being enforced extra-territorially on internet companies and certainly won’t be the last. And this is very much a global phenomenon. While the decision by the Chinese state channel CCTV and several other companies to effectively boycott the NBA team Houston Rockets and the censorship of content supporting Hong Kong protests by Blizzard Entertainment garnered worldwide attention, these were only the latest in a long list of companies that have had to apologise to China and ‘correct’ themselves for issues like listing/depicting Taiwan as a separate country or quoting Dalai Lama on social media websites that were not even accessible in the country.
In Saudi Arabia, Netflix had to remove an episode of Hasan Minhaj’s Patriot Act that was heavily critical of Crown Prince Mohammed bin Salman. In the United States as well, content delivery network Cloudflare has twice stopped offering services to websites (Daily Stormer in 2017 and 8chan in 2019) when faced with heavy criticism because of the nature of the content on them. In both cases, CEO Matthew Prince expressed his dismay at the fact that a service provider had the ability to make this decision.

Of Streisand and censorship

The key difference in the current scenario is that Hotstar appears to have made a proactive intervention. There was no mass outrage or moral panic that it was forced to respond to. By choosing not to make this John Oliver episode available on its platform, it effectively cut out the middlemen and skipped straight to the censorship step. A move that was ultimately self-defeating, since the main segment of the episode is available in India through YouTube anyway and has already garnered more than 60 lakh views, while the app was subjected to one-star ratings on Google’s Play Store. The attempt has only drawn more attention to both the episode and the company itself. This is commonly known as the Streisand Effect. A more cynical assessment could be that this step has earned Star India some ‘brownie points’ from the Modi government.

Earlier this month, the Internet and Mobile Association of India (IAMAI) announced a new ‘Self-Regulation for Online Curated Content Providers’ code with four signatories (Hotstar, Jio, SonyLiv, and Voot). Notably, an earlier version of the code released in February 2019 had additional signatories that chose to opt out of this version. It was also reported that some of the underlying causes for discontent were a lack of transparency, due process, and the limited scope of consultations in the lead-up to the new code. Some of the broad changes in the new code include widening the criteria for restricted content from disrespecting national symbols to the sovereignty and integrity of India. It also empowered the body responsible for grievance redressal to impose financial penalties. In addition, signatories of the code and the grievance redressal body are obliged to receive any complaints forwarded/filed by the government.

A letter by the Internet Freedom Foundation to Justice A.P. Shah cited as concerns the code’s prioritisation of reducing liability over creativity, and the risk of industry capture by large media houses. The pre-emptive action taken in the case of Last Week Tonight’s Modi episode perfectly encapsulates the risks of such a self-regulatory regime. It signals both intent and, potentially, the establishment of processes to readily restrict content deemed inimical to corporate interests. Such self-censorship, once operationalised, is a slippery slope and can result in much more censorship down the road.

The general trend of responding to outrage by falling in line was problematic in itself. But in India’s current context, the eagerness to self-censor is significantly more harmful, especially when you consider that other forms of mass media are already beholden to a paternalistic state with severely weakened institutions.

The author is a research analyst at The Takshashila Institution, an independent centre for research and education in public policy. Views are personal.
Tackling Information Disorder, the malaise of our times
This article was originally published in Deccan Herald.
The term ‘fake news’ – popularised by a certain world leader – is today used as a catch-all term for any situation in which there is a perceived or genuine falsification of facts irrespective of the intent. But the term itself lacks the nuance to differentiate between the many kinds of information operations that are common, especially on the internet.
Broadly, these can be categorized as disinformation (false content propagated with the intent to cause harm), misinformation (false content propagated without the knowledge that it is false/misleading or the intention to cause harm), and malinformation (genuine content shared with a false context and an intention to harm). Collectively, this trinity is referred to as ‘information disorder’.
Over the last four weeks, Facebook and Twitter have made some important announcements regarding their content moderation strategies. In January, Facebook said it was banning ‘deepfakes’ (videos in which a person is artificially inserted by an algorithm based on photos) on its platform. It also released additional plans for its proposed ‘Oversight Board’, which it sees as a ‘Supreme Court’ for content moderation disputes. Meanwhile, in early February, Twitter announced its new policy for dealing with manipulated media. But the question really is whether these solutions can address the problem.
Custodians of the internet
Before dissecting the finer aspects of these policies to see if they could work, it is important to unequivocally state that content moderation is hard. The conversation typically veers towards extremes: Platforms are seen to be either too lenient with harmful content or too eager when it comes to censoring ‘free expression’. The job at hand involves striking a difficult balance and it’s important to acknowledge there will always be tradeoffs.
Yet, as Tarleton Gillespie says in Custodians of the Internet, moderation is the very essence of what platforms offer. This is based on the twin pillars of personalisation and the ‘safe harbour’ that they enjoy. The former implies that they will always tailor content for an individual user, and the latter essentially grants them the discretion to choose whether a piece of content can stay up on the platform or not, without legal ramifications (except in a narrow set of special circumstances like child sex abuse material, court orders, etc.). This, of course, reveals the concept of a ‘neutral’ platform for what it is: a myth. Which is why it is important to look at these policies with as critical an eye as possible.
Deepfakes and Synthetic/Manipulated Media
Let’s look at Facebook’s decision to ban ‘deepfakes’ using algorithmic detection. The move is laudable; however, it will not address the lightly edited videos that also plague the platform. Additionally, disinformation agents have modified their modus operandi to use malinformation, since it is much harder for algorithms to detect. This form of information disorder is also very common in India.
Twitter’s policy goes further and aims to label/obfuscate not only deepfakes but any synthetic/manipulated media after March 5. It will also highlight and notify users that they are sharing information that has been debunked by fact-checkers. In theory, this sounds promising but determining context across geographies with varying norms will be challenging. Twitter should consider opening up flagged tweets to researchers.
The ‘Supreme Court’ of content moderation
The genesis of Facebook’s Oversight Board was a November 2018 Facebook post by Mark Zuckerberg ostensibly in response to the growing pressure on the company in the aftermath of Cambridge Analytica, the 2016 election interference revelations, and the social network’s role in aiding the spread of disinformation in Myanmar in the run-up to the Rohingya genocide. The Board will be operated by a Trust to which the company has made an irrevocable pledge of $130 million.
For now, cases will be limited to individual pieces of content that have already been taken down and can be referred in one of two ways: By Facebook itself or by individuals who have exhausted all appeals within its ecosystem (including Instagram). And while the geographical balance has been considered, for a platform that has approximately 2.5 billion monthly active users and removes nearly 12 billion pieces of content a quarter, it is hard to imagine the group being able to keep up with the barrage of cases it is likely to face.
There is also no guarantee that geographical diversity will translate into the genuine diversity required to deal with the kind of nuanced cases that may come up. There is no commitment as to when the Board will also be able to look into instances where controversial content has been left online. Combined with the potential failings of its deepfakes policy to address malinformation, this will result in a tradeoff where harmful, misleading content will likely stay online.
Another area of concern is the requirement to have an account in the Facebook ecosystem to be able to refer a case. Whenever the board’s ambit expands beyond content takedown cases, this requirement will exclude individuals/groups, not on Facebook/Instagram from seeking recourse, even if they are impacted.
The elephant in the room is, of course, WhatsApp. With over 400 million users in India and support for end-to-end encryption, it is the main vehicle for information disorder operations in the country. The oft-repeated demands for weakening encryption and providing backdoors are not the solution either.
Information disorder, itself, is not new. Rumours, propaganda, and lies are as old as humanity itself, and surveillance will not stop them. Social media platforms significantly increase the velocity at which this information flows, thereby magnifying the impact of information disorder. Treating this solely as a problem for platforms to solve is equivalent to addressing a demand-side problem through exclusively supply-side measures. Until individuals start viewing new information with a healthy dose of skepticism, and media organisations stop being incentivised to amplify information disorder, there is little hope of addressing this issue in the short to medium term.
(Prateek Waghre is a research analyst at The Takshashila Institution)
Budget and Cybersecurity, a missed opportunity
This article originally appeared in Deccan Chronicle.

In the lead-up to the 2020 Budget, the industry looked forward to two major announcements with respect to cybersecurity. First, the allocation of a specific ‘cyber security budget’ to protect the country’s critical infrastructure and support skill development. In 2019, even Rear Admiral Mohit Gupta (head of the Defence Cyber Agency) had called for 10% of the government’s IT spend to be put towards cyber security. Second, a focus on cyber security awareness programmes was seen as critical, especially considering the continued push for ‘Digital India’.

On 1 February, in a budget speech that lasted over 150 minutes, the finance minister made two references to ‘cyber’: once in the context of cyber forensics, to propose the establishment of a National Police University and a National Forensic Science University, and once when cyber security was cited as a potential frontier that quantum technology would open up. This was a step up from the last two budget speeches (July 2019 and February 2019), both of which made no reference to the term ‘cyber’ in any form. In fact, the last time ‘cyber’ was used in a budget speech was in February 2018, in the context of cyber-physical weapons. Other recent developments, such as the National Security Council Secretariat’s (NSCS) call for inputs for a National Cyber Security Strategy (NCSS), the inauguration of a National Cyber Forensics Lab in New Delhi, and an acknowledgement by Lt Gen Rajesh Pant (National Cyber Security Coordinator) that ‘India is the most attacked in the cyber sphere’, signal that the government does indeed consider cyber security an important area.

While the proposal to establish a National Forensic Science University is welcome, it will do little to meaningfully address the skill shortage problem. The Cyber Security Strategy of 2013 had envisioned the creation of 500,000 jobs over a five-year period. A report by Xpheno estimated that there are 67,000 open cyber security positions in the country. Globally, Cybersecurity Ventures estimates, there will be 3.5 million unfilled cyber security positions by 2021, 2 million of which are expected to be in the Asia-Pacific region.

It is unfair to expect this gap to be filled by state action alone; yet the budget represents a missed opportunity to nudge industry and academia towards fulfilling this demand at a time when unemployment is a major concern. The oft-reported instances of cyber or cyber-enabled fraud that one sees practically every day in the newspaper clearly point to a low level of awareness and cyber hygiene among citizens. Allocation of additional funds for MeitY’s Cyber Swachhta Kendra in the Union Budget would have sent a strong signal of intent towards addressing the problem.

Prateek Waghre is a research analyst at The Takshashila Institution, an independent centre for research and education in public policy.
Analysis of whitelisted URLs in Jammu and Kashmir
This post was originally published on Medianama.

By Rohini Lakshané and Prateek Waghre

The Supreme Court gave a judgement on January 10, 2020, directing the Central government to review the total suspension of Internet services in Jammu and Kashmir imposed since August 5, 2019, and to restore essential services. In response, the government of Jammu and Kashmir issued a whitelist comprising 153 entries on January 18 and increased the number of entries to 301 on January 24. What would the experience of an ordinary resident of Jammu and Kashmir be like under the whitelist arrangement? We conducted a preliminary analysis to empirically determine whether the 301 whitelisted websites and services would be practically usable and found that only 126 were usable to some degree.

Before we delve further into the analysis, it is pertinent to understand the background and context in which an ordinary resident of Jammu and Kashmir may access the Internet. India has experienced the highest number of intentional Internet shutdowns in the world since 2012. Kashmir has been facing the longest intentional Internet shutdown ever recorded in a democratic country. Voice and SMS functionality, without Internet connectivity, was reactivated on postpaid mobile connections in Jammu and Kashmir on October 14, 2019. People in the Kashmir valley can access the Internet only through the 844 kiosks run by the government. The orders issued on January 14 and January 24 set out the terms on which connectivity is being partially restored:
- 2G Internet connectivity would be reinstated on postpaid mobile connections in 10 districts of Jammu Division and 2 of Kashmir Division.
- “The internet speed shall be restricted to 2G only.”
- 400 additional Internet kiosks are to be installed in Kashmir.
- Social media websites, peer-to-peer (P2P) communication apps, and Virtual Private Network (VPN) services have been explicitly prohibited.
- ISPs are to provide wired broadband to companies engaged in “Software (IT/ ITES) Services”.
- For wired connections, Paragraph II of the order dated January 24 states, “For fixed-line Internet connectivity: Internet connectivity shall be [made] available only after Mac-binding.”
- Voice and SMS functionality would be restored on prepaid mobile connections across all districts of Jammu and Kashmir.
- For providing internet access on locally-registered pre-paid mobile connections, telecom service providers or “TSPs shall initiate a process of verification of credentials of these subscribers as per the norms applicable for postpaid connections”.
- “The ISPs shall be responsible for ensuring that access is allowed to whitelisted sites only.”
- The order dated January 14 states that it “may be subject to further revision” after which the department would conduct “a review of the adverse impact, if any, of this relaxation on the security situation.” According to the order released on January 24, “the law enforcement agencies have reported no adverse impact so far. However, they have expressed apprehension of misuse of terror activities and incitement of general public…”
- “Whitelisting of sites shall be a continuous process,” which could be interpreted to mean that the government would periodically update the list.
Thus, an ordinary internet user in Jammu and Kashmir accessing the Internet under this whitelist arrangement would be doing so via 2G mobile connections or Internet kiosks placed inside government offices.
Questions raised by a selection of entries in the whitelist
- In the orders dated January 14 and 18, the Government of Jammu and Kashmir cites the use of the Internet for the following activities as some of the reasons for implementing the total Internet blackout in Kashmir: “terrorism/terror activities”, activities of “anti-national elements”, “rumour-mongering”, “spread of propoganda/ ideologies”, “targeted messaging to propagate terrorism”, “fallacious proxy wars”, “causing disaffection and discontent” among people, and the “spread of fake news”. In light of this explanation, what were the process and criteria applied to select these specific URLs/ services/ websites to be on the whitelist?
- What were the process and criteria, if any, to reject websites and services that are similar to those whitelisted and those that provide the same or comparable services? For example, some travel aggregator websites (MakeMyTrip, Goibibo, Cleartrip, Trivago, Yatra, etc) have been included but not others (Agoda, Expedia, Kayak, Hotels.com). Online shopping/e-commerce websites Flipkart, Amazon, Myntra, and Jabong feature in the whitelist but not Snapdeal, Ebay, and others.
- How were the residents of Jammu and Kashmir informed about this whitelist, that these specific services/ websites had become accessible? News websites and social media websites are still blocked. The orders will appear in an issue of the gazette, which is just one source of information and not accessible by everybody.
- In view of all the above questions, how do the authorised government officers “ensure implementation of these directions in letter and spirit”, as stated in paragraph 7 of the order dated January 14?
Role of Internet Service Providers (ISPs)
The whitelist and its accompanying orders raise some concerns about ISPs’ implementation of the whitelist.
- In the case of the entries that contain neither URLs nor qualifying information about including subdomains or about permitting mobile applications, it should not be left to the discretion of an Internet Service Provider (ISP) to determine the appropriate URLs or the appropriate mode of access (mobile or desktop application, mobile or desktop version) of a whitelisted service or website. ISPs are intermediaries and are not authorised to take a judgement call on the orders they receive from the government. Moreover, the whitelist orders explicitly state that the onus of ensuring that sites outside the whitelist remain inaccessible is on the ISPs (“The ISPs shall be responsible for ensuring that access is allowed to whitelisted sites only.”)
- In the case of invalid or indeterminate URLs, how are whitelisted entries to be implemented? What are the options for an ISP to seek clarifications about these from the government?
- ISPs have been directed to provide wired broadband to companies in Jammu and Kashmir engaged in “Software (IT/ ITES) Services”. In view of the fact that the terms IT (information technology) and ITES (information technology-enabled services) cover a broad range of commercial activities, how is this directive going to be operationalised?
- In a recently published paper analysing how ISPs in India block websites, researchers at the Centre for Internet and Society (CIS) found that ISPs and governments were not willing to disclose the URLs that were blocked. The study also found that less than 30% of blocked URLs were common across the ISPs included in the study, and that different ISPs used different techniques to implement blocklists. This is indicative of arbitrary action on the part of individual ISPs. It is also likely that Internet users have limited recourse owing to the lack of transparency in censoring websites. When combined with the need for ISPs to exercise their own discretion/judgement in implementing these orders (as argued in the first point above), there is plenty of potential for inconsistent enforcement by ISPs.
- It is unclear how ISPs will actually implement this whitelist. If the filtering is done at the DNS layer, then the number of practically unusable websites will likely be higher than what we encountered, since the DNS resolution process itself is likely to be broken for any website that returns anything other than an A record/ IP Address.
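As a rough illustration of that last point, the snippet below (Python standard library only) checks whether a hostname resolves directly under its own name or via a CNAME chain to another domain, typically a CDN host that would not itself appear on the whitelist. The sample hostnames are illustrative, not drawn from any particular version of the list.

```python
import socket

# Hypothetical sample of whitelisted hostnames (illustrative only).
hostnames = ["www.icicibank.com", "www.amazon.in", "www.jkpwdrb.nic.in"]

for name in hostnames:
    try:
        canonical, aliases, addresses = socket.gethostbyname_ex(name)
    except socket.gaierror as err:
        print(f"{name}: resolution failed ({err})")
        continue
    # If the canonical name differs from the queried name, resolution goes via a
    # CNAME chain, usually to a CDN domain that DNS-layer whitelisting would break.
    if canonical != name:
        print(f"{name}: CNAME -> {canonical} ({', '.join(addresses)})")
    else:
        print(f"{name}: A record(s) {', '.join(addresses)}")
```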
Findings and Analysis
1. Entries with no URL
1. Media service providers/streaming services: There are 7 streaming services on the list: Amazon Prime, Netflix, Sony Liv, Zee 5, Hotstar, Voot, and Airtel TV. They support viewing on desktop browsers and mobile apps. This may be a reason why the whitelist only states their names and not the corresponding URLs. Assuming that these services are enabled for use on both desktop and mobile applications, they will still be practically unusable because:
- Only 2G speeds are currently permitted in Jammu and Kashmir. 2G speeds are too slow for streaming audio-video and multimedia content.
- Streamed content is delivered over CDN (content delivery network) URLs, none of which are present on the current whitelist.
2. JioChat: JioChat is an iOS and Android instant messaging app that supports voice and video calling. It is the only service on this whitelist that supports these functionalities. It is unlikely that this app would be practically usable for video/voice calls because 2G speeds are too slow for it.
2. Government-owned eTLDs
The whitelist includes three entries for government-owned eTLDs (effective top-level domains, also known as “public suffixes”): “Gov.in”, “Nic.in”, and “Ac.in”. The entries do not contain URLs or qualifying information about including subdomains. It should be explicitly stated if ISPs are expected to allow gov.in, nic.in, ac.in, and all their subdomains. For example, gov.in houses four levels of subdomains. Currently, it is unclear how ISPs will interpret and implement this since the entries in the whitelist do not contain adequate information. The directory of Indian government websites is available at http://goidirectory.nic.in.
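One permissive reading of such an entry is 'the domain and everything under it', which an ISP could implement with a simple suffix match. The helper below is a hypothetical sketch of that interpretation, not a description of how any ISP actually handled these entries.

```python
def matches_entry(hostname: str, entry: str) -> bool:
    """True if the hostname equals the whitelist entry or is a subdomain of it."""
    hostname = hostname.lower().rstrip(".")
    entry = entry.lower().rstrip(".")
    return hostname == entry or hostname.endswith("." + entry)

# Illustrative checks against the bare eTLD-style entries in the whitelist.
entries = ["gov.in", "nic.in", "ac.in"]
for host in ["india.gov.in", "goidirectory.nic.in", "www.icicibank.com"]:
    allowed = any(matches_entry(host, entry) for entry in entries)
    print(f"{host}: {'allowed' if allowed else 'blocked'}")
```

A stricter exact-match reading would block every subdomain; the entries as written do not say which interpretation is intended, which is exactly the ambiguity left to the ISPs.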
3. Banking and Finance Services
Log-in pages are on domains or subdomains different from those listed in the whitelist, which is why these services are not practically useful regardless of whether the actual whitelisted URL is accessible/usable. For example,
- The website of ICICI Bank https://www.icicibank.com is whitelisted. However, the URL to log-in to personal banking at ICICI is on a subdomain of the website, https://infinity.icicibank.com, which is not whitelisted. So, individuals with an account at ICICI Bank, will not be able to access their accounts online.
- While https://www.hdfc.com has been whitelisted, HDFC Bank’s personal banking services are on a different domain, https://www.hdfcbank.com, which will also remain inaccessible.
VPNs and proxy services are prohibited, so an ordinary user would be unable to circumvent restrictions imposed by the whitelist. Of the 15 websites categorised under “Banking” in the whitelist, only 2 (www.jkbankonline.com and www.westernunion.com) had accessible log-in pages/sections, and all 15 had at least one identifiable issue when they were accessed with the whitelist restrictions in place.
4. CDN, Sub-Domains, and Third-Party Content
The State of the Web report maintained by the HTTP Archive indicates that the median number of requests on a webpage for mobile devices is approximately 70. These requests are spread across subdomains of the website, domains owned by content delivery networks (CDNs) such as akamaized.net, cloudfront.net, cloudflare.net, etc., and third-party domains such as Google Analytics, tag managers, real user monitoring tools, advertisers, and so on. The whitelist approach interferes with these requests and, more often than not, results in an adverse impact on the functioning of the website itself. In our analysis, we observed that this affected websites to varying degrees:
- Minimal visible impact
- Some images don’t load
- All images don’t load
- Critical functions become unresponsive, such as search in the case of some OTAs (online travel agents)
- The entire layout scheme breaks
Example 1: Consider www.amazon.in. The request map shows that a significant number of requests are made to domains other than www.amazon.in. Since these requests will be blocked, the website will barely function for a user accessing it from behind the whitelist. This is evident from the screenshot of the landing page.

Request map for www.amazon.in

Screenshot of www.amazon.in
Example 2: In the case of the website of the Indian Railways, www.irctc.co.in, the request map once again indicates a large number of requests to other domains. This breaks both the layout of the page (as is evident in the screenshot) and the operation of the website.

Request map for www.irctc.co.in

Screenshot of www.irctc.co.in
Example 3: The website of the Public Works Department of the Government of Jammu and Kashmir, www.jkpwdrb.nic.in, sends no requests to other domains as indicated by the request map and thus the whitelist restrictions have no visible impact. It should be noted that this kind of website setup is uncommon.

Request map for www.jkpwdrb.nic.in

Screenshot of www.jkpwdrb.nic.in
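One way to approximate this request-map exercise is to export a HAR file of a page load from a browser's developer tools and count how many requests would survive a strict hostname whitelist. The sketch below assumes such an export and a plain-text copy of the whitelist; both file names are placeholders.

```python
import json
from urllib.parse import urlparse

# Hypothetical inputs: a HAR export of a page load and a plain-text copy of
# the whitelist (one hostname per line).
HAR_FILE = "page_load.har"
WHITELIST_FILE = "whitelist_hostnames.txt"

with open(WHITELIST_FILE) as f:
    whitelist = {line.strip().lower() for line in f if line.strip()}

with open(HAR_FILE) as f:
    har = json.load(f)

allowed, blocked = 0, 0
blocked_hosts = set()
for entry in har["log"]["entries"]:
    host = (urlparse(entry["request"]["url"]).hostname or "").lower()
    # Count a request as allowed only if its exact hostname is whitelisted,
    # mirroring the strict interpretation an ISP might apply.
    if host in whitelist:
        allowed += 1
    else:
        blocked += 1
        blocked_hosts.add(host)

print(f"Requests allowed: {allowed}, blocked: {blocked}")
print("Blocked hosts include:", sorted(blocked_hosts)[:10])
```

For a typical page of the kind discussed above, most requests go to subdomains, CDNs or third parties, so the blocked count dominates, which is what the broken layouts in the examples reflect.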
5. Search Engines
The updated list in the January 24 order contains 10 hostnames classified as search engines and www.bing.com classified under utilities.
- The whitelist did not include Indian subdomains (google.co.in, in.search.yahoo.com), which means that users may not be able to access them, whether they type the addresses manually or get redirected to the Indian domain of the search engine based on language or browser settings.
- The list included Canadian and UK subdomains for Google. It also included the Canadian and French-Canadian versions of Yahoo Search. There was also no justification provided for the exclusion of Indian locales while including non-Indian locales.
- We also found that while conducting a search was possible, a user could only successfully navigate to results from websites that were on the whitelist (subject to how they worked as determined by our testing). For websites not on the whitelist, the information contained in the snippets was readable on the search results page, but not beyond it.
So we have categorised search engines as ‘partially usable’.
6. News/Technology Updates
The updated list in the January 24 order also contains 74 websites categorised as “News” (60) and “Technology Updates” (14).
- There was a mix of regional, national and international websites.
- Audio/podcast and video content for all of these sites were either delivered from subdomains/CDN domains or YouTube and hence did not work.
- International publications such as The Washington Post, Wall Street Journal, and The New York Times allow limited views before enforcing a paywall. However, their sign-in pages were not accessible. In such cases, even if the websites were minimally visually affected, they were categorised as ‘practically not usable’.
- For the remaining sites, we observed that the impact on page layout varied in degree:
- All pages and UI elements were broken.
- Only the Home page was broken.
- Only subsection pages were affected.
- Only article pages were not affected.
The categorisation between usable, partially usable, and not usable was done on the basis of how easy or difficult it was to consume content and navigate within each website.

Screenshot indicating broken page layout
7. Additional Observations
- Mail: The whitelist included 4 webmail services. However, none were usable since the sign-in pages required navigating to domains that were not on the whitelist. They have been categorised as ‘practically not usable’.
- Entertainment: The updated list from the January 24 order also included 7 entertainment sites along with URLs, which made testing them possible (this is in contrast to the 6 listed in the January 18 order, which did not include URLs and only named the services). Only one of these (https://wynk.in) was able to stream content successfully. It was categorised as ‘practically usable’ even though it may be difficult to stream content on a 2G network. 6 out of 7 have been categorised as ‘practically not usable’. It should be noted that such content is typically consumed on apps, which were not tested as a part of this exercise. Apps generally use different hostnames to request resources.
- Official websites of apps: The whitelist includes Gingerlabs.com, the official website for the note-taking mobile app Notability. Another entry, Kinemaster.com, is the official website of the eponymous video-editing app for Android and iOS. The website enables users to get support and interact with the community of users. For the purpose of this analysis, the websites were tested and categorised as per their usability. It should be noted that new downloads would not be possible since the Apple App Store and Google Play Store are not included in the whitelist. It is also unclear whether users who already have these apps installed will be able to use them, since the apps may not use the same domain(s) to make requests.
- URLs that contain paths: Two URLs on the whitelist contain specific paths (www.marutisuzuki.com/MarutiSuzuki/Car and https://www.heromotocorp.com/en-in/). It is unclear how ISPs could whitelist these two entries without whitelisting the domains Marutisuzuki.com and Heromotocorp.com.
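A quick way to see the problem: filtering at the DNS or hostname level, which is how such lists would typically be enforced (particularly over https), cannot see URL paths at all. The sketch below is purely illustrative; it normalises the two path-qualified entries with an explicit scheme for parsing and shows that any page on those domains then passes a host-level check.

```python
# Illustrative sketch: a hostname-level filter cannot enforce path-specific
# whitelist entries. Reducing the two path-qualified entries to hostnames
# necessarily allows every page on those domains.
from urllib.parse import urlparse

PATH_QUALIFIED_ENTRIES = [
    "https://www.marutisuzuki.com/MarutiSuzuki/Car",
    "https://www.heromotocorp.com/en-in/",
]

ALLOWED_HOSTS = {urlparse(entry).hostname for entry in PATH_QUALIFIED_ENTRIES}

def host_level_allow(url: str) -> bool:
    """Apply the check the way a host-level filter would: on the hostname only."""
    return urlparse(url).hostname in ALLOWED_HOSTS

# Both requests pass, even though only the first matches the path that was
# actually listed, which is why these entries effectively whitelist the
# entire domains.
print(host_level_allow("https://www.marutisuzuki.com/MarutiSuzuki/Car"))  # True
print(host_level_allow("https://www.marutisuzuki.com/any-other-page"))    # True
```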
Summary of Findings
| Finding | Count | Remarks |
| --- | --- | --- |
| Number of entries in the whitelist | 301 | |
| Number of duplicate entries | 13 | |
| Number of invalid URLs | 4 | |
| Number of entries with no specified URL and no qualifying information about the website/service | 8 | |
| Number of inconclusive/indeterminate entries | 6 | |
| Number of URLs after validation and de-duplication | 270 | |
| Number of websites that are practically usable | 58 | Most of these websites consist largely of textual information. |
| Number of websites that are practically partially usable | 68 | Some important features are adversely affected. |
| Total number of websites usable to some degree | 126 | |
| Number of URLs in the list (no protocol, or http) that default to https | 94 out of 270 | These may not work in actual use because of the redirect to https. |
Usability by ‘Field’ (practically usable?)

| Field (as specified in the whitelist) | Could Not Test | No | Partially | Yes |
| --- | --- | --- | --- | --- |
| Automobiles | 1 | 1 | 1 | 1 |
| Banking | 8 | 7 | | |
| Education | 25 | 14 | 7 | |
| Employment | 1 | 1 | 1 | |
| Entertainment | 7 | 8 | 1 | 2 |
| | 1 | 3 | | |
| News | 6 | 18 | 17 | 19 |
| NGOs | 1 | 4 | | |
| Search Engines | 1 | 4 | 5 | |
| Services | 4 | 5 | 1 | 3 |
| Technology Updates | 8 | 4 | 2 | |
| Travel | 3 | 13 | 1 | 3 |
| Utilities | 8 | 49 | 15 | 15 |
| Weather | 1 | | | |
| Web Service | 1 | 1 | | |
| Total | 31 | 144 | 68 | 58 |
*The detailed results from testing all entries in the first version of the whitelist, as recorded on January 22 and 23 (IST), are available here. We updated the set of results on January 26 to reflect the next version of the whitelist, available here. That version carries over all entries of the previous one unchanged.
Method
Testing URLs on an Unrestricted Internet Connection
To test whether all the entries in the list were functioning, we first accessed them from an Indian IP address on an unrestricted 4G connection. The ones that were not functional were categorised as follows (a minimal sketch of this validation pass appears after the list):
- Invalid URL: 4 URLs are invalid. One (www.hajcommitee.gov.in) contains a typographical error. 3 others are badly formed (https://www.google.com > Gmail; https://oppo-in; www.google.com > chrome [sic]).
- Duplicate URL: 13 URLs were found to be duplicates of other entries. 3 URLs are present on the list along with their respective redirected versions. For instance, www.trivago.com redirects to https://www.trivago.in, both of which are present on the whitelist. We excluded the former from our analysis and considered the redirected version. The other two instances are Airtel.in and Cleartrip.com.
- Entries with no URL specified: We have excluded 8 entries that are names of services and not URLs. 7 of these are media services providers such as Netflix and Amazon Prime.
- Inconclusive entry/indeterminate URL: 6 URLs returned an error message and were excluded. 3 of those (Gov.in, Nic.in and Ac.in) did not include a protocol (http:// or https://) or the www. prefix. The domain registrations for Gov.in and Nic.in had also expired, as indicated by WHOIS records at the time of writing this analysis.
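One way to automate this first validation pass could look like the sketch below, assuming the whitelist entries are available one per line in a plain-text file (the file name whitelist.txt is hypothetical). It catches syntactically malformed entries and exact duplicates after normalisation; entries that name a service without a URL, and duplicates that only emerge via redirects (such as www.trivago.com and https://www.trivago.in), still require the manual review and redirect checks described here.

```python
# Minimal sketch of a validation pass over the raw whitelist. Typos inside
# otherwise well-formed hostnames (e.g. www.hajcommitee.gov.in) are not
# caught here and need manual review or a DNS lookup.
import re
from urllib.parse import urlparse

URL_PATTERN = re.compile(r"^(https?://)?[a-z0-9.-]+\.[a-z]{2,}(/\S*)?$", re.IGNORECASE)

def normalise(entry: str) -> str:
    """Lower-case the hostname and drop the scheme and trailing slash for comparison."""
    if not entry.lower().startswith(("http://", "https://")):
        entry = "http://" + entry
    parsed = urlparse(entry)
    return (parsed.hostname or "") + parsed.path.rstrip("/")

def validate(path: str):
    seen, valid, invalid, duplicates = set(), [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            entry = line.strip()
            if not entry:
                continue
            if not URL_PATTERN.match(entry):
                invalid.append(entry)      # e.g. "https://oppo-in"
            elif normalise(entry) in seen:
                duplicates.append(entry)
            else:
                seen.add(normalise(entry))
                valid.append(entry)
    return valid, invalid, duplicates

valid, invalid, duplicates = validate("whitelist.txt")
print(len(valid), "to test;", len(invalid), "invalid;", len(duplicates), "duplicates")
```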
The results have been logged and categorised according to this schema in the detailed analysis (available here):
| Column | Description |
| --- | --- |
| Is the URL accessible? | Logs the result of a preliminary check for URLs that lead to error messages, such as broken links and misconfigured websites/webpages. Categorised as: Yes (the URL is accessible); Invalid URL; Duplicate URL; No URL specified; or Inconclusive entry/Indeterminate URL (the URL or whitelist entry is not accessible for the reasons described above). |
| Does the URL redirect to another? | Indicates whether a URL redirects to another URL by default. Categorised as Yes/No. |
| Redirects to | Specifies the redirect target URL, if any. Categorised as: No redirect; https (the whitelist entry contains http or no protocol and redirects by default to its https version, with the rest of the URL identical; for example, www.moneycontrol.com redirects by default to https://www.moneycontrol.com); or <URL> (the whitelist entry redirects by default to a URL with a different path or prefix, in which case the redirect target is specified here; for example, https://www.icicidirect.com redirects to https://www.icicidirect.com/idirectcontent/Home/Home.aspx). |
| Remark/Observation | Observations based on the testing so far. |
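The two redirect columns above can be populated programmatically. A minimal sketch, assuming the third-party requests library, an unrestricted connection and no error handling:

```python
# Sketch of a check for the "Does the URL redirect to another?" and
# "Redirects to" columns, run over an unrestricted connection.
import requests

def redirect_target(entry: str, timeout: int = 15):
    """Return (redirected?, final_url) for a whitelist entry."""
    url = entry if entry.lower().startswith(("http://", "https://")) else "http://" + entry
    response = requests.get(url, timeout=timeout, allow_redirects=True)
    final_url = response.url
    return final_url.rstrip("/") != url.rstrip("/"), final_url

# Example from the whitelist: an entry with no protocol that defaults to https.
print(redirect_target("www.moneycontrol.com"))
# e.g. (True, 'https://www.moneycontrol.com/')
```

Entries whose final URL differs only by scheme fall into the 'https' category above; any other difference is recorded as a redirect to a different URL.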
Whitelist Testing
The 270 URLs that remained were put through whitelist testing on a 10 Mbps connection using a Chrome browser extension called Whitelist Manager, which can be configured to block access to all URLs except whitelisted ones. The results have been logged and categorised according to this schema (available here):
| Column | Description |
| --- | --- |
| Page Layout | Logs how the page appears visually to the viewer. Classified as either Intact or Broken. |
| Images loading? | Categorised as Yes/No/Partial. |
| Has sign-in? | Logs whether the website offers users an option to sign in for its services or for personalised content. Categorised as Yes/No. |
| Sign-in section visible? | Records whether the sign-in page is accessible, or the sign-in section on the website is functional, with the whitelist restrictions in place. Note: the actual sign-in process was not tested for every website. Additional failures are possible if sign-in relies on calls to non-whitelisted domains. |
| Other functions affected? | A subjective assessment of whether other parts of the website were impacted by the whitelist restrictions. Any that were found are listed in the ‘Specify’ column. This assessment should be considered indicative, not exhaustive. |
| Practically usable? | A subjective assessment of whether the website could still be used. |
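The breakage patterns recorded in this schema follow from the way a whitelist manager decides every request a page triggers, at the level of the hostname. The sketch below only illustrates that mechanism; it is not the implementation of the extension we used, and the blocked hostnames are hypothetical stand-ins for the kinds of cross-domain sub-resources real pages load.

```python
# Minimal sketch of the hostname-level decision a whitelist manager makes for
# every request a page triggers. The blocked hosts below are hypothetical.
from urllib.parse import urlparse

WHITELIST = {"www.irctc.co.in", "www.jkpwdrb.nic.in"}  # illustrative subset

def allow(request_url: str) -> bool:
    """Allow a request only if its hostname is on the whitelist."""
    return urlparse(request_url).hostname in WHITELIST

# The page itself loads, but stylesheets, scripts, images, CDN content or
# sign-in flows served from other hosts are blocked, which is what breaks
# layouts and sign-in sections even on whitelisted sites.
for url in [
    "https://www.irctc.co.in/",                   # allowed: whitelisted host
    "https://static.example-cdn.com/js/app.js",   # blocked: hypothetical CDN host
    "https://accounts.example-sso.com/login",     # blocked: hypothetical sign-in host
]:
    print(allow(url), url)
```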
Limitations of Our Method
- We tested the whitelisted entries for usability via a whitelist management extension for the Chrome browser. Results may differ if different whitelist management software were used on a different browser. However, the difference is unlikely to be significant enough to change our final assessment of whether a website was usable or not.
- We conducted the tests on a 10 Mbps connection and did not use Chrome’s bandwidth-throttling feature, since the primary intent was to determine whether the sites were accessible at all. In the actual use case, people will visit the whitelisted entries over 2G connections, on which even the websites we were able to access may not load in a reasonable amount of time. (A sketch of how such throttling could be scripted appears after this list.)
- We did not sign in to any of the websites, try to write and send an email, carry out a financial transaction, or upload a document such as a tax filing. Doing so may significantly alter the final assessment of their usability.
- 94 URLs (http or no protocol specified) redirect by default on an unrestricted connection to their https version. We have thus tested the https versions only. This was done due to a limitation of the Chrome browser extension we used for the testing. (Refer to Column E entitled “Does the URL redirect to another?” in the spreadsheet containing detailed analysis.) However, these 94 URLs may not function in the actual use case in Kashmir depending on the ISPs’ implementation of the whitelist.
- We focused on visual elements and usability only. We ignored the impact on analytics and monitoring tools as long as it did not affect an end user’s ability to navigate the website. This is, however, bound to be a matter of concern for website operators.
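For anyone extending this testing, low-bandwidth conditions can be approximated in an automated run. The sketch below is a possible extension, not part of our method; it uses Selenium’s Chrome bindings to apply throttling roughly in the vicinity of a 2G connection before loading a whitelisted URL, and the latency and throughput figures are illustrative assumptions.

```python
# Possible extension of the method (not something we did): load a whitelisted
# URL with Chrome throttled to roughly 2G-like conditions before judging
# usability. Requires the third-party `selenium` package and chromedriver.
from selenium import webdriver

driver = webdriver.Chrome()
driver.set_network_conditions(
    offline=False,
    latency=800,                    # added round-trip latency in ms (illustrative)
    download_throughput=32 * 1024,  # ~256 kbps, a rough 2G-class figure
    upload_throughput=16 * 1024,    # ~128 kbps
)
driver.get("https://www.jkpwdrb.nic.in")
print(driver.title)
driver.quit()
```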
*Rohini Lakshané is a researcher and technologist. She is Director (Emerging Research), The Bachchao Project. Prateek Waghre is a Research Analyst at The Takshashila Institution, a centre of education and research in public policy.
Will India follow Russian example on domestic internet?
After Russia tested RuNet, what are the chances that India will try its hand at NayaBharatNet?
In the final weeks of last year, there were reports that Russia had successfully tested RuNet, its ‘domestic internet’ that would be cut off from the global internet. Specifics of the exercise, such as whether it was really successful and what challenges it faced, are not known, but it made for an ominous end to a decade marked by growing disillusionment with the idea of the internet as a liberating force.
This was always on the cards once Russia and China started working together in the lead-up to the former’s Yarovaya law, which imposed geographical restrictions on the transfer of Russian users’ data. In December 2019, Russia also passed a law making it mandatory for devices sold in the country to come embedded with Russian apps from July 2020. While the law does not specify which devices and apps are covered, critics are concerned that its vague wording opens the door for it to be misused to force the installation of spyware.
Russia is not alone in this quest: China is the pioneer, and others such as North Korea and Iran are along for the ride as well. After a week-long nationwide internet shutdown in response to protests, and an exercise by government officials to collate critical ‘foreign’ websites that sparked speculation about the creation of a ‘whitelist’ of allowed sites, Iran’s National Intranet Network (NIN) is once again in the spotlight. This was followed by a statement from President Rouhani that the network was being strengthened so that people would not need foreign networks to meet their needs. North Korea, too, has a tightly controlled domestic intranet, Kwangmyong, whose content is largely controlled by the state.
China’s Great Firewall (GFW) has been around for over a decade and is not the unitary system it is often made out to be. It uses a combination of manual and automated techniques to block global content but largely works on the principle of blacklisting unwanted websites and content. Many international websites do work but are extremely slow because of the scanning and filtering that inbound internet traffic to the country is put through. Operating a website from inside mainland China requires a number of local permits, depending on the industry. Much of the internet backbone is state-controlled. The state has continued to tighten the noose through a combination of restrictive regulation and stricter interpretation of existing rules.
A highly restrictive cybersecurity law passed in 2015 called for mandatory source code disclosures. In 2016, working with ISPs, the government set out specifications for an Information Security Management System intended to automate the ability of provincial authorities to monitor and filter internet traffic. In 2017, it tweaked licensing rules to ensure that permits would only be issued to domains registered to a mainland China-based company. The extent to which these rules are enforced may vary, but they leave the state with a sword of Damocles it can drop whenever it chooses. By constantly increasing the cost of doing business for non-Chinese companies, it has achieved a ‘chinternet’ without explicitly cutting the cord (yet).
Fears of a ‘splinternet’ along national boundaries, or a ‘balkanisation’ of the internet, are not new. But the likelihood is now higher than ever before as governments try to take control of cyberspace after ceding space in its early years. Research by the Oxford Internet Institute and Freedom House has revealed the use of disinformation campaigns and the co-option of social media for manipulation and surveillance by various governments. The United Nations General Assembly passed a resolution in support of a Russia-backed Open-Ended Working Group (OEWG), which has drawn criticism on the grounds that it prioritises cyber sovereignty and domestic control of the internet over human rights. Countries that advocate a free and open internet are in a bind over whether to participate in the group or cede ground in the global norm-setting process. The continued passage of regulation with extraterritorial application by various countries will fragment the internet and strengthen the constituency favouring cyber sovereignty.
‘NayaBharatNet’ a possibility?
India has yet to articulate its position on some of the divisive issues concerning global norms in cyberspace, yet it has repeatedly stressed the principle of cyber sovereignty, positioning it alongside the Sino-Russian camp. While it seems to have softened its position on data localisation for now, the rhetoric about national sovereignty and security that accompanies it has been used by Russia and China in the past.
Authoritarianism by the Indian state is also surely on the rise; the events that unfolded in 2019 provide ample empirical evidence for this. The fact that various police departments are proactively taking to social media channels to threaten or deter posts that run contrary to the state’s narrative, and the frequent use of internet shutdowns, show that the desire to control the internet is extremely strong. International criticism has repeatedly been portrayed as mischief by a ‘foreign hand’. The creation of a strictly regulated domestic digital echo chamber is not unimaginable in this context. In fact, it is a logical next step, as the current tactics are bound to have diminishing returns over time.
Today, the political and economic conditions for such a move do not exist. The IT industry would obviously oppose it vigorously. And unlike in China, the telecommunication backbone infrastructure is not state-owned; however, the sector as a whole is probably the weakest it has ever been and is tending towards a monopoly or duopoly. It also has a history of being regulated with a heavy hand.
Until now, India has followed a policy of denying cyber intrusions or claiming that no significant harm was done. However, in the aftermath of ‘undeniable’ real-world harm inflicted by a cyber attack, the Overton window could shift towards supporting such an initiative in the name of national security, and the moment could very well be exploited. Sometime in the not-so-distant future, we could all be communicating using Kimbho on NayaBharatNet.
(Prateek Waghre is a research analyst at The Takshashila Institution)
This article was originally published in Deccan Herald.
Look at the numbers: Why Digital India can’t afford internet shutdowns with slowing economy
Take a look at these numbers: 3, 5, 6, 14, 31, 79, 134, 91. These are the documented instances of internet shutdowns in India each year between 2012 and 2019. The 2019 figure will certainly rise during the final weeks of the year as anger against the Citizenship (Amendment) Act and the Bharatiya Janata Party grows.
And yet, as internet shutdowns are reported in Meerut, Aligarh, Malda, Howrah, Assam and Nagaland, one wonders if the Narendra Modi government really thinks they can help assuage anger and old resentments.
The world over, protesters have always found a way around clampdowns. In Hong Kong, protesters are using Bridgefy, a service that relies on Bluetooth, to organise. And yet, all governments, whether led by the Congress, the BJP or any other party, keep using internet shutdowns as a kill switch. But tech stops for no one. It’s time India thought beyond shutdowns.
A new era
In almost all cases, mobile internet services were shut down. For four of the last five years, more than half of these shutdowns have been ‘proactive’ in nature. They have been imposed under either Section 144 of the CrPC or the Temporary Suspension of Telecom Services Rules issued by the Ministry of Communications under the NDA government in 2017. While a challenge to the use of the former was dismissed by the Supreme Court in 2016, the latter suffers from a lack of transparency and was passed without any consultation with citizens, who are directly affected. RTI requests have also revealed that many instances of internet shutdowns go undocumented and that due process is not always followed.
The willingness and urgency on display in snapping communication lines is worrying, especially in ‘Digital India’. Considering that 97 per cent of the estimated 570 million internet users use at least a mobile device to access the internet, and given the growing reliance on connectivity for communication and commerce, this is a severely disproportionate measure. Various studies have pegged the cost of these disruptions at between 0.4 and 2 per cent of a country’s daily GDP, and at $3 billion for India over the five-year period ending in 2017.
Since 2017, India has witnessed nearly twice as many shutdowns. Even so, until mid-2019, internet shutdowns predominantly affected parts of Rajasthan and Jammu and Kashmir, which together account for nearly 250 instances. More importantly, they were rarely imposed in urban centres. In August 2019, a new era began unfolding. First, the ongoing internet shutdown in Jammu and Kashmir is the widest sustained disruption ever documented. Second, on the day of the Supreme Court’s Ayodhya verdict, proactive internet shutdowns were in operation in Aligarh, Agra and Jaipur, signalling a shift in the willingness to deploy them in urban centres. And finally, with the ongoing protests against the Citizenship (Amendment) Act, reports have been coming in of internet disruptions in Assam, Tripura, multiple districts in West Bengal, and Aligarh and Meerut in Uttar Pradesh, cementing internet shutdowns as the tool of choice.
Diminishing returns
The framework of Radically Networked Societies (RNS) can be used to understand the interplay between protesters and the state. An RNS is defined as a web of connected individuals possessing an identity (real or imagined) and having a common immediate cause. The internet as a medium gives such networks the ability to scale faster and wider than ever before. With measures like internet shutdowns and curfews, the state aims to increase the time it takes for them to mobilise by restricting information flows. However, such methods are bound to have diminishing returns over time. Snapping communication lines will do little to quell genuine resentment and may instead encourage people to take to the streets and violate curfews, increasing the chances of escalation. Mesh networking apps that operate without internet connectivity will eventually make their way into the toolkit of Indian protesters, as they did in the Hong Kong protests, rendering the argument for shutdowns as an ‘online curfew’ moot.
Better than shutdowns
The Indian state must evolve beyond the use of internet shutdowns. Instead, it should look to address the underlying causes and reduce the time it takes to counter-mobilise. There have been some instances of state authorities trying different approaches. In September 2016, when there were protests in Bengaluru over the Cauvery water-sharing judgment, the Bengaluru Police took to Twitter to proactively dispel misinformation and rumours instead of shutting down the internet. In the days leading up to the Ayodhya verdict, several police departments were proactively monitoring social media for objectionable messages. While this did not function smoothly on the day of the verdict, since the police went on an excessive case-registering spree, the Bengaluru example shows that it can work. Building capacity and training cyber personnel specifically to counter flows of misinformation online must be a consideration going forward. The reaction to viral hoax messages circulating before the Ayodhya verdict, warning of surveillance, also produced some interesting insight. While more surveillance is never the answer, alternative ways of promoting responsible behaviour should be explored. These could range from encouraging fact-checking of information to political leaders leading by example and not encouraging abusive trolls or misinformation flows themselves. Conflict and polarisation as engagement must be actively discouraged.
Another important step is to counter dangerous speech in society. Research has shown that misinformation/disinformation does not only circulate during specific events. Conditions that exacerbate such flows already exist in society. While the state alone cannot do this, it must nudge the people towards countering it. Such measures must be articulated in the upcoming National Cybersecurity Policy.
Ultimately, that the world’s largest democracy is by far the world leader in such disproportionate tactics should be reason enough for the Indian state to rethink the use of internet shutdowns. But if that doesn’t suffice, the realisation that they come with an expiry date should spur it into fixing the underlying problems, unless it wants to live with the diminishing returns that incentivise escalation. The author is a Research Analyst at The Takshashila Institution’s Technology and Policy Programme. Views are personal. This article originally appeared in ThePrint.in