Commentary

Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.

Why missed call democracy is a bad idea

The Narendra Modi-led government launched a ‘missed call campaign’ on January 3, 2020, asking people to give a missed call to a number to register their support for the controversial Citizenship (Amendment) Act. Home Minister Amit Shah has claimed that 52,72,000 missed calls were received from verifiable phone numbers.

What has been happening in the background since the launch of the campaign is a reflection of the state of affairs in the country. Ever since the campaign started, Twitter has been abuzz with misleading tweets asking people to call the number by promising ‘job offers’, ‘free Netflix subscriptions’, ‘romantic dates with women in the area’, and so forth. Tweets such as ‘Akele ho? Mujhse dosti karoge?’ (Feeling lonely? Want to be friends?) by a Twitter account with 16k followers, Prime Minister Modi among them, point to a much larger misinformation campaign, presumably by the IT cell of the ruling party. A counter-campaign was also launched, soliciting missed calls to demonstrate opposition to the CAA and NRC.

Where’s my number?

In the age of surveillance capitalism, any entity, especially the government, running a campaign to garner support using phone numbers opens private individuals up to grave risks. The people calling the toll-free number have no information on whether their numbers will be stored in a database, shared with third parties, and/or used for a future campaign by the government. First principles of privacy dictate that data collection should be proportionate to the legitimate aim and limited purpose being pursued. Furthermore, the data principal should provide informed consent to the collection of data.

There seems to be no way for citizens to determine whether the government is storing their data, and no process to get their records deleted if they wish. Repurposing the potential database to micro-target voters during election campaigns is a severe threat that emerges from this exercise. People who called the number are either staunch supporters of the Bharatiya Janata Party (BJP) or vulnerable youth who fell into the honeytrap while looking for jobs, subscription TV, or romantic partners. Given that the government now potentially has access to members of its core voter base as well as gullible people at the margins, it can push information and opinions that favour its ideology. Alternatively, participants in the counter-campaign can be categorised as anti-establishment voices. This narrative dominance, empowered by personalisation algorithms, can result in the formation of filter bubbles, where people are isolated from conflicting viewpoints and their existing beliefs are reinforced.

The design of the missed call campaign itself is flawed. An honestly designed campaign would have provided options to vote either for or against. The absence of a way to express an opposing view reduces it to an exercise in confirmation bias. The missed call mechanism is also susceptible to manipulation. It is unclear whether these are features or bugs. While 52 lakh may seem like a sizeable number, it is a drop in the ocean in a country of more than 130 crore people. In fact, it is less than 3 per cent of the BJP’s total membership of 18 crore.

Why referendums fail

If this approach to engaging with citizens is legitimised, it opens the door to its use every time there is a risk of backlash over a government decision. Even before Brexit became the poster child for failed referendums, political theorists had advised against them. When asked about the best time to use referendums, Michael Marsh, a political scientist at Trinity College Dublin, was quoted as saying ‘almost never’.

In Democracy for Realists, political scientists Christopher Achen and Larry Bartels lament the idea that the ‘only possible cure for the ills of democracy is more democracy’. They cite a body of research concluding that citizens often do not have the necessary knowledge, nor the inclination to acquire it, when it comes to voting on nuanced issues. Decisions are often made on short-term considerations, such as personal tax savings or a reduction in government expenditure, without an analysis of anticipated unintended consequences. Additionally, referendum processes tend to be captured by interest groups and are typically decided in favour of whichever side has deeper pockets. Low-effort voting methods, such as online voting and missed calls, are likely to be overused. This will desensitise the public, exacerbating all the shortcomings of referendums.

The use of missed calls by a democratically elected government to vindicate its stand on contentious issues is not only ineffectual, it also exposes unsuspecting individuals to severe risks. Employing systems without basic privacy considerations, clear purpose limitations, and straightforward redressal mechanisms can lead to misuse in the future and undermine the democratic ethos of the nation.

(Utkarsh Narain and Prateek Waghre are research analysts at The Takshashila Institution)
This article was originally published in Deccan Herald.

Will India follow Russian example on domestic internet?

After Russia tested RuNet, what are the chances that India will try its hand at NayaBharatNet?

In the final weeks of last year, there were reports that Russia had successfully tested RuNet, its ‘domestic internet’ that could be cut off from the global internet. Specifics of the exercise are not known – whether, for example, it was really successful and what challenges it faced – but it made for an ominous end to a decade marked by growing disillusionment with the idea of the internet as a liberating force.

This was always on the cards once Russia and China started working together in the lead-up to the former’s Yarovaya law, which imposed geographical restrictions on the transfer of Russian users’ data. In December 2019, Russia also passed a law making it mandatory for devices sold in the country to come embedded with Russian apps from July 2020. While the law does not specify which devices and apps are covered, its critics are concerned that its vague nature opens the door for it to be misused to force the installation of spyware.

Russia is not alone in this quest, though: China is the pioneer, and others like North Korea and Iran are along for the ride as well. Iran’s National Intranet Network (NIN) is once again in the spotlight after a week-long nationwide internet shutdown in response to protests, and after an exercise by government officials to collate critical ‘foreign’ websites sparked speculation about the creation of a ‘whitelist’ of allowed sites. This was followed by a statement from President Rouhani that the network was being strengthened so that people would not need foreign networks to meet their needs. North Korea, too, has a tightly controlled domestic internet, Kwangmyong, whose content is largely controlled by the state.

China’s Great Firewall (GFW) has been around for over a decade and is not the unitary system it is often made out to be. It uses a combination of manual and automated techniques to block global content, but largely works on the principle of blacklisting unwanted websites and content. Many international websites do work but are extremely slow because of the scanning and filtering that inbound internet traffic to the country is put through. For a website to operate from inside mainland China, a number of local permits are required, depending on the industry, and much of the internet backbone is state-controlled. The state has continued to tighten the noose through a combination of restrictive regulation and stricter interpretation of existing rules.

A highly restrictive Cybersecurity Law passed in 2015 called for mandatory source code disclosures. In 2016, working with ISPs, the government set out specifications for an Information Security Management System that aimed to automate the ability of provincial authorities to monitor and filter internet traffic. In 2017, it tweaked licensing rules to ensure that permits would only be issued to domains registered to a mainland China-based company. The extent to which these rules are enforced may vary, but they leave a ‘Sword of Damocles’ in the state’s toolkit that it can drop whenever it chooses. By constantly increasing the costs of doing business for non-Chinese companies, China has achieved a ‘chinternet’ without explicitly cutting the cord – yet.

Fears of a ‘splinternet’ along national boundaries, or the ‘balkanisation’ of the internet, are not new. But the likelihood is now higher than ever before as governments try to take control of cyberspace after ceding space in its early years. Research by the Oxford Internet Institute and Freedom House has revealed the use of disinformation campaigns and the co-option of social media for manipulation and surveillance by various governments. The United Nations General Assembly passed a resolution in support of a Russia-backed Open-Ended Working Group (OEWG), which has drawn criticism on the grounds that it prioritises cyber sovereignty and domestic control of the internet over human rights. Countries that advocate a free and open internet are in a bind over whether to participate in the group or cede control in the global norm-setting process. The continued passage of regulation with extraterritorial application by various countries will fragment the internet and strengthen the constituency favouring cyber sovereignty.

Is ‘NayaBharatNet’ a possibility?

India has yet to articulate its position on some of the divisive issues concerning global norms in cyberspace, yet it has repeatedly stressed the principle of cyber sovereignty, positioning itself alongside the Sino-Russian camp. While it seems to have softened its position on data localisation for now, similar rhetoric about national sovereignty and security has been used by Russia and China in the past.

Authoritarianism by the Indian state is also surely on the rise – events that unfolded in 2019 provide ample empirical evidence for this. The fact that various police departments are proactively taking to social media channels to threaten or deter posts that run contrary to the state’s narrative, together with the frequent use of internet shutdowns, shows that the desire to control the internet is extremely strong. International criticism has repeatedly been portrayed as mischief by a ‘foreign hand’. The creation of a strictly regulated domestic digital echo chamber is not unimaginable in this context. In fact, it is a logical next step, as the current tactics are bound to have diminishing returns over time.

Today, the economy (political or otherwise) for such a move does not exist. The IT industry would obviously vigorously oppose it. And unlike China’s, the telecommunication backbone infrastructure is not state-owned. But the sector as a whole is probably the weakest it has ever been, tending towards a monopoly or duopoly, and it has a history of being regulated with a heavy hand.

Until now, India has followed a policy of denying cyber intrusions or claiming that no significant harm was done. However, in the aftermath of ‘undeniable’ real-world harm inflicted by a cyber attack, the Overton window could move towards supporting such an initiative in the name of national security, and the moment could very well be exploited. Sometime in the not-so-distant future, we could all be communicating using Kimbho on NayaBharatNet.

(Prateek Waghre is a research analyst at The Takshashila Institution)

This article was originally published in Deccan Herald.

Read More

Disney Should Buy Spotify

You may think that winning the streaming race depends on having the best content, but things have already begun to change. As of now, the company with the better bundle will win, and that’s why it makes sense for Disney to buy Spotify this year. To read the full article, visit OZY.

(Rohan is a technology policy analyst at The Takshashila Institution.)


Your Fitbit is Going to Replace Clinics near You

First, it was payments; now it’s healthcare. Big Tech in the US and China is revolutionising the health sector, with hundreds of billions of dollars of market share at stake. Multiple factors are driving this movement. For starters, there is the simple need for both American and Chinese tech giants to find new avenues for growth, and there are only so many trillion-dollar industries left to disrupt to add shareholder value. China has more reasons and more at stake here.

Both countries boast high levels of internet penetration and smartphone use. Both the US and China are rapidly ageing societies, which implies a growing geriatric healthcare burden and creates incentives for new alternatives to overcrowded hospitals. Both are home to a wealthy middle class that is seeking better health solutions. According to Royal Philips’ Future Health Index 2019, both the US and China are global frontrunners in the adoption of digital health technology, with a large number of medical professionals and consumers relying on tools for self-monitoring and online consultations. This is a key contributor to the rise in demand for wearables, and it is supported by, and in turn fuels, their dynamic and thriving innovation ecosystems.

This explains why American and Chinese companies are making moves in healthcare based on their core competencies. Recently, Amazon banked on its software to move into telemedicine and invited healthcare companies to build tools on Alexa’s platform. Amazon’s core competence, however, is the efficiency of its distribution networks, so the e-commerce giant acquired PillPack, an online pharmacy. The Alibaba Group, on the other hand, entered the healthcare game early with its Tmall Pharmacy in 2015. In 2018, Alibaba consolidated its healthcare assets, including medical devices, e-appointments, drug purchases, and delivery services, under the banner of Alibaba Health, which leverages the group’s advantages in data processing and e-commerce.

Another big Chinese player in the field is Tencent, which owns WeDoctor, one of the world’s biggest health tech start-ups. Google is great at data analytics and OS development; keep that in mind and Project Nightingale begins to make sense, as does Google’s $2.1 billion acquisition of Fitbit. Google’s Chinese search counterpart Baidu has bounced back from a 2016 controversy over healthcare ads to explore leveraging artificial intelligence and blockchain technology for its medical data sharing and distribution solution. Meanwhile, Apple excels in devices that track wellness: think of the Apple Watch and the electrocardiogram that comes installed on it, or the open-source CareKit and ResearchKit frameworks that Apple has been pushing to developers. IDC data for 2018 show that while Apple is the market leader in the wearables segment, Chinese firms Xiaomi and Huawei take the second and third spots, respectively; their global ranking is buttressed by their dominance in the Chinese and Indian markets.

So what does the future of the health tech sector look like? We predict three scenarios that we believe will play out over the next five years.

First, wearables will become the new OPDs. With Big Tech investing in healthcare across Silicon Valley, Zhongguancun, and Shenzhen, wearables and telemedicine have a bright present and future in their diagnostic capabilities. Recording pulse or temperature, scanning bones or tissues, diagnosing on that basis, and getting medicines have become, or are becoming, tasks that can be performed remotely or delivered to you. Over the coming decade, wearables will reliably send accurate data in real time for millions of people. This will give them a decisive advantage over the number of people physical OPDs can cater to, making the latter obsolete.

Second, tech giants will dominate health and life insurance. Wearables and smartphones are becoming increasingly sophisticated in their diagnostic and tracking capabilities. As that continues with every new iteration of Fitbits and Apple Watches, the OS becomes a platform for companies to sell services and earn revenue. watchOS and Wear OS (and/or whatever a future Fitbit OS is called) are likely to be used to sell insurance through their devices. Whether Google and Apple curate new insurance policies or end up acquiring insurance companies to do it for them is irrelevant. Insurance is a lucrative market, and the data from apps in the OS gives Google and Apple a comparative advantage; it is a matter of when, not if, both tech giants start peddling their own insurance through the OS on smartphones or wearables.

Third, the Sino-US rivalry will stymie health tech’s future growth. The deepening strategic rivalry between the US and China has already shifted from competition over trade policies to a battle for technological supremacy. This is playing out in the form of an expanding definition of sensitive technologies that must be protected, tighter security reviews of Chinese tech investments, the undoing of completed acquisitions, the blacklisting of certain firms, export restrictions, and a contest for foreign markets and data streams. Much of this is captured in the geopolitically charged discourse over Huawei and 5G. The health tech industry can expect a similarly rocky future. Collaboration between research communities and business entities across the Pacific will be difficult. Acquisitions in foreign markets are likely to become politically polarising decisions. Capital flows into each other’s health tech ecosystems will become increasingly constrained. Data will be the biggest sticking point, with most states preferring some form of localisation.

Read More

Can Modi govt know who you text? Should FB be liable for your posts? We’ll know in Jan 2020

Apart from deciding on end-to-end encryption for chats, the amended IT Rules will also decide on what content belongs on the internet.

Should Facebook be liable for the content you post? Should Apple build a backdoor to allow access to iPhones? Should the government know who you are texting, and should it have access to your messages? On 15 January 2020, the amendments to India’s IT Rules will answer these questions by finalising the intermediary guidelines.

That is also one of the reasons why, over the course of 2019, we have talked about whether the government of India should be allowed to break end-to-end encryption. Of course, the topic gained traction after the November reports of the Pegasus WhatsApp hack. And the Narendra Modi government said the law allows it to intercept and monitor digital content in the public interest. The problem with this whole encryption debate is that it takes up a disproportionate amount of mind space. Don’t get me wrong; encryption is a vitally important issue. However, it is not the only issue that will be covered by the IT amendments. The January amendments will also decide on several other crucial issues.

Also, we will use the words intermediary and platform interchangeably. But for context, a platform is an online service like Facebook or Twitter, while intermediary includes platforms, the servers they are hosted on, and even the cybercafé you might access the platform through.

How many users before a company needs an office in India?

According to the proposed amendments, any intermediary with over 50 lakh users will need to:

  1. Have a permanent registered office in India
  2. Appoint a nodal point of contact for the government
  3. Be incorporated under the Companies Act

This may read fine at first glance. But take another look. ‘Users’ as a term is vague. Monthly active users? Daily active users? Registered users? You might have an account on Pocket but never end up using it. Does that mean Pocket now needs to have an office in India and appoint a person in charge of talking to the government, on the off chance that 50 lakh people one day decide to use the app?

The other question is how the government keeps track of which intermediaries have appointed a nodal point of contact. Apps do not notify the government before they are made available to the people; they simply show up on the App Store or Play Store, ready to be used. And how would the government even know when an intermediary has crossed 50 lakh users? Should all intermediaries make their user stats public, or release a notification when they meet the threshold?

Clearly, these guidelines were drafted keeping just Facebook and WhatsApp in mind. However, they will have anticipated but unintended consequences for smaller firms.



What content belongs on the internet?

The intermediary guidelines also talk at length about content takedowns and what should and should not be allowed to remain on the internet. You could say that the Modi government has written itself a blank cheque in being able to dictate this. Here are just some of the grounds on which companies may be asked to remove content:

  • In violation of decency and morality
  • Public order
  • Impacts the sovereignty and integrity of India
  • Security of state
  • Friendly relations with foreign states
  • In relation to contempt of court
  • Defamation or incitement to offence
  • Defamatory
  • Obscene
  • Pornographic
  • Paedophilic
  • Hateful
  • Harassing
  • Blasphemous

A lot of these make sense. We as a society have a consensus that child porn, hate crimes, and videos of animal cruelty do not belong on the internet. The government also has every right to argue that content that impacts its security and relations with other states should be taken down. But look at some of the other grounds. Who decides what content is defamatory or blasphemous? For instance, comedy at the expense of someone or something can end up disparaging the subject. Does that mean comedy does not belong on the internet? You could argue a similar case for memes, documentaries, and blogs. Based on these grounds, anything that the government of the day doesn’t like can be taken down.

Should we have a best-efforts approach to aiding law enforcement?

Remember the anticipated but unintended consequences? Well, not all intermediaries have the same access to user data. A cloud service provider does not have the same power as a multi-million-user platform. So, when law enforcement goes asking for information, it should take into account the asymmetries that exist within the ecosystem. A best-efforts approach will make sure that requests do not make cloud service providers or even cybercafés liable for sharing data they don’t have access to. Because if, at the end of the day, a request is not technically feasible, all it does is ensure that the matter will be taken to court, placing undue stress on the intermediary.

As for whether or not the government should break encryption, I’d strongly recommend against it. Internet shutdowns are bad enough. Imagine if we lived in a world where the government could learn who you text and what you may be talking about. Recently, American WeChat users were banned for celebrating the Hong Kong election results. Similar instances could happen in India, and at scale, that could be a threat to democracy unlike any we have seen before.

To that end, watch out for the guidelines on 15 January; they could set the tone for the rest of the year.

(Rohan Seth is a Policy Analyst with the Technology and Policy Programme of The Takshashila Institution. Views are personal.)

This article was first published in The Print.


PLA SSF: Why China will be ahead of everyone in future cyber, space or information warfare

The People’s Liberation Army Strategic Support Force (SSF) contingent made its debut appearance at China’s military day parade earlier this year. Formed on this day in 2015, the force is mandated to create synergies between China’s space, cyber, and electronic warfare capabilities. The PLA considers these three domains critical for “commanding strategic heights.” The SSF was formed to optimise China’s dominance in these three domains and to contribute to the PLA’s broader goals of strategic deterrence and integration for information warfare. Read more...


India’s National Cybersecurity Policy Must Acknowledge Modern Realities

Earlier this year, it was discovered that India was the target of two cyberattacks in the same month. The malware attacks at the Kundankulam Nuclear Power Plant and the Indian Space Research Organization (ISRO) are believed to be the outcomes of phishing attempts on employees. In 2018, it was reported that an officer of the Indian Air Force was sharing sensitive information on Facebook with two women who had honey-trapped him. None of these incidents are known to have resulted in severe harm, but the possibility that they could have is reason enough for India to cultivate and shape international discussions on cyberspace.As is the case with both international terrorism and protection of the environment, cooperation is a prerequisite to deal with cyberthreats given their borderless nature. India’s National Cyber Security Policy (2013) did not assign much weight to this aspect and defined no measurable outcomes against which progress could be judged. With its upcoming National CyberSecurity Policy (2020-2025), India has the opportunity to align its domestic policy with its global aspirations.Warfare in Cyberspace Is UniqueCyberspace is an amalgamation of the virtual with the physical. Actions in the virtual realm can affect the physical domain. With low barriers to entry, cyberspace provides attractive options for the launch of attacks and allows actors to achieve strategic outcomes both within and outside of the information domain. From crumbling critical infrastructure to designing a smart misinformation campaign that can influence democratic processes, the spectrum of outcomes that cyberattacks can achieve is broad. The Stuxnet malware, a U.S.-Israel joint operation to target Iran’s nuclear enrichment plant in Natanz, displayed the capabilities of a highly sophisticated and targeted cyber-offensive operation. Operations against Ukraine’s power grid in 2015, misinformation campaigns targeting U.S. 
presidential elections in 2016, and the WannaCry and NotPetya ransomware outbreaks in 2017 all showed the potential for real-world impact and collateral damage.There are two features that distinguish these attacks from conventional ones. First, cyberattacks are hardly predictable. Accurately determining an incoming attack is at present not possible. Second, as long as there is plausible deniability, attribution is tough. As such, warfare in cyberspace poses a unique challenge to national security and the lack of rules to govern it intensifies this challenge.Security in CyberspaceThe United Nations Charter, the Laws of Armed Conflict (LOAC), and other regional arrangements provide a general overarching framework for governments to manage problems of security across all domains. Cyberspace differs from conventional domains of warfare because it functions as both a battlefield and a weapon. It is therefore risky to assume that existing rules of conflict can be extended to cyberspace as well.American political scientist Joseph Nye has discussed the absence of coherence among existing norms that govern cyberspace. Existing practices are based on agreements between private players (largely multinational corporations) with only a mild degree of enforceability. Since providing security is a critical function of government and it is most susceptible to attacks, only governments are properly incentivized to set the rules. Numerous track two groups and various private conferences and commissions continue to work on the development of norms. Successive UN-GGEs (Governmental Groups of Experts) have developed a consensus that the UN Charter and international law apply to cyberspace. But cyberspace is changing faster than countries can legislate internally and negotiate externally.There is no denying that all security efforts need to be collaborative. 
But as with international terrorism and environmental protection, effective norms and rules can only be set if all stakeholders consensually arrive at what the rules should be. Currently there are two camps on the global stage: a Sino-Russian camp and a rival one comprising the United States, Western Europe, Japan, Australia, and New Zealand. The former espouses the supremacy of national sovereignty in the governance of domestic cyberspace, risk of destabilization by the application of existing international humanitarian law to cyberspace, and the need for new, binding international agreements. The latter advocates for a free and open internet as well as the full applicability of international law (including the right to self-defense, use of countermeasures) to cyberspace. Resolutions sponsoring the formation of the Russia-backed Open Ended Working Group (OEWG) and the UN-GGE 2019-21 were both passed in the United Nations General Assembly in 2018. The UN now has two parallel tracks working toward the establishment of norms in cyberspace. The OEWG is open to all member states and will hold consultations with stakeholders across members, NGOs, and private industry while the UN-GGE is comprised of 25 member states with consultation typically limited to regional organizations. The prevailing atmosphere of mistrust portends further deterioration rather than improvement. This variance between great powers has weighed heavily on international discussion on norms while cyberattacks continue to happen, quietly.There is some scope for optimism yet. At a panel in the recently concluded Internet Governance Forum in Berlin, the Global Commission on the Stability of Cyberspace (GCSC) proposed eight norms including protection of the public core of internet and infrastructure essential to elections, referenda, and plebiscites. This was followed by informal consultations at both the OEWG and UN-GGE in early December. 
Through the Paris Tech Accords, Digital Geneva Convention, and Charter or Trust, private companies have also sought to play a more active role in the shaping of norms, which is significant as they operate a significant portion of the public internet.What Has India Done So Far?In 2011, India’s proposal for a Committee on Internet Related Policies (CIRP) comprising 50 member states was met with the criticism that it would create an exclusive club. Since then, an analysis of India’s contribution to debates on internet governance by the Center for Internet and Society (India) has revealed a tendency to shift between support for multilateralism and mutli-stakeholderism. Researchers have termed this “nuanced multilateralism,” where a broad range of stakeholders are consulted, but not involved in implementation and enforcement. On the question of cyberspace sovereignty, India seems to share common ground with the Sino-Russian camp, but has refrained from commenting definitively on the issues dividing the two camps. India was one of the member states that backed both UNGA resolutions that resulted in the formation of the OEWG and the UN-GGE (2019-2021). It is also a member of the UN-GGE and has not yet contributed formally to OEWG proceedings. On the multilateral front, it has stayed out of the Osaka Track for Data Governance and the Budapest Convention on Cybercrime.There is no single approach that captures India’s engagement with multilateral institutions. Its rule-taker instinct is evident from India’s support for the United Nations’ peacekeeping operations. Contrary to this is the rule-breaker approach, which is evident from India’s endeavor to be recognized as a nuclear weapon state while also challenging the norms established by the Nonproliferation Treaty. The expectation that India will be a rule-maker all by itself is unrealistic. In the multipolar world that exists today, no single country, let alone India, can become make the only rule-maker. 
A more achievable goal for India would be to play the role of a rule-shaper, an active voice among rising powers. This goal draws strength from India's economic prowess and diplomatic experience in working with alliances. India's success in shaping the international narrative on climate change has already proven its ability as a rule-shaper. With its upcoming National Cybersecurity Policy (2020-2025), India must look to articulate and justify its position on the applicability of international law to cyberspace. It should bring its domestic policy in line with its global aspirations. Given the importance of private companies in this exercise, it must also consider creating the office of a tech ambassador to present its position consistently. This level of transparency can serve as an important confidence-building measure as it engages across multiple stakeholders and fora to shape future norms.

Shibani Mehta and Prateek Waghre are Research Analysts at The Takshashila Institution, an independent center for research and education in public policy.

This article originally appeared in The Diplomat

Read More
High-Tech Geopolitics | Prateek Waghre

Look at the numbers: Why Digital India can’t afford internet shutdowns with slowing economy

Take a look at these numbers: 3, 5, 6, 14, 31, 79, 134, 91. These are the documented instances of internet shutdowns in India each year between 2012 and 2019. The 2019 figure will certainly rise during the final weeks of the year as anger against the Citizenship (Amendment) Act and the Bharatiya Janata Party grows.

And yet, as internet shutdowns are reported in Meerut, Aligarh, Malda, Howrah, Assam and Nagaland, one wonders whether the Narendra Modi government really thinks they can help assuage anger and old resentments.

The world over, protesters have always found a way around a clampdown. In Hong Kong, protesters are using Bridgefy, a service that relies on Bluetooth, to organise. And yet, all governments, whether led by the Congress, the BJP or any other party, keep using internet shutdowns as a kill switch. But tech stops for no one. It is time India thought beyond shutdowns.

A new era

In almost all cases, mobile internet services were shut down. For four of the last five years, more than half of these shutdowns have been 'proactive' in nature. They have been imposed based either on Section 144 of the CrPC or on the Temporary Suspension of Telecom Rules issued by the Ministry of Communications under the NDA government in 2017. While an appeal against the use of the former was struck down by the Supreme Court in 2016, the latter suffers from a lack of transparency and was passed without any consultation with citizens, who are directly affected. RTI requests have also revealed that many instances of internet shutdowns go undocumented and that due process is not always followed.

The willingness and urgency on display in snapping communication lines is worrying, especially in 'Digital India'. Considering that 97 per cent of the estimated 570 million internet users use a mobile device to access the internet, and the growing reliance on connectivity for communication and commerce, this is a severely disproportionate measure. Various studies have pegged the cost of these disruptions at anywhere from 0.4-2 per cent of a country's daily GDP to $3 billion for India over the five-year period ending in 2017.

Since 2017, India has witnessed nearly twice as many shutdowns. Even so, until mid-2019, internet shutdowns predominantly affected parts of Rajasthan and Jammu and Kashmir, which together account for nearly 250 instances. More importantly, they were rarely imposed in urban centres. In August 2019, a new era began unfolding. First, the ongoing internet shutdown in the region of Jammu and Kashmir is the widest sustained disruption ever documented. Second, on the day of the Supreme Court's Ayodhya verdict, proactive internet shutdowns were in operation in Aligarh, Agra and Jaipur, signalling a new willingness to deploy them in urban centres. And finally, with ongoing protests against the Citizenship (Amendment) Act, reports have been coming in of internet disruptions in Assam, Tripura, multiple districts in West Bengal, and Aligarh and Meerut in Uttar Pradesh, cementing the use of internet shutdowns as the tool of choice.

Diminishing returns

The framework of Radically Networked Societies (RNS) can be used to understand the interplay between protesters and the state. An RNS is defined as a web of connected individuals possessing an identity (real or imagined) and having a common immediate cause. The internet as a medium gives such societies the ability to scale faster and wider than ever before. With measures like internet shutdowns and curfews, the state aims to increase the time it takes for protesters to mobilise by restricting information flows. However, such methods are bound to have diminishing returns over time. Snapping communication lines will do little to quell genuine resentment and may instead encourage people to take to the streets and violate curfews, thereby increasing the chances of escalation. Mesh networking apps that operate without internet connectivity will eventually make their way into the toolkit of Indian protesters, as they did in the Hong Kong protests, rendering the argument of shutdowns as an 'online curfew' moot.

Better than shutdowns

The Indian State must evolve beyond the use of internet shutdowns. Instead, it should look to address the underlying causes and reduce the time it takes to counter mobilisation. There have been some instances of state authorities trying different approaches. In September 2016, when there were protests in Bengaluru over the Cauvery water-sharing judgment, instead of shutting down the internet the Bengaluru Police took to Twitter to proactively dispel misinformation and rumours. In the days leading up to the Ayodhya verdict, several police departments proactively monitored social media for objectionable messages. While this did not function smoothly on the day of the verdict, since the police went on an excessive case-registering spree, the Bengaluru example shows that it can work. Building capacity and training cyber personnel specifically to counter flows of misinformation online must be a consideration going forward. The reaction to viral hoax messages warning of surveillance that circulated before the Ayodhya verdict also produced some interesting insight. While more surveillance is never the answer, alternative ways of promoting responsible behaviour should be explored. This could range from encouraging fact-checking of information to political leaders leading by example and not encouraging abusive trolls or misinformation flows themselves. Conflict and polarisation as engagement must be actively discouraged.

Another important step is to counter dangerous speech in society. Research has shown that misinformation and disinformation do not circulate only during specific events; the conditions that exacerbate such flows already exist in society. While the state cannot do this alone, it must nudge people towards countering it. Such measures must be articulated in the upcoming National Cybersecurity Policy.

Ultimately, the fact that the world's largest democracy is by far the world leader in such disproportionate tactics should be reason enough for the Indian state to rethink the use of internet shutdowns. If that does not suffice, the realisation that shutdowns come with an expiry date should spur it into fixing the underlying problems, unless it wants to live with the diminishing returns that incentivise escalation.

The author is a Research Analyst at The Takshashila Institution's Technology and Policy Programme. Views are personal.

This article originally appeared in ThePrint.in

Read More

Are Internet shutdowns healthy for India?

Democratic governments must be accountable to the public and provide a rationale for disrupting Internet services in a timely manner. In the interest of transparency, all governments should document the reasons, timing, alternatives considered, decision-making authorities and the rules under which shutdowns were imposed, and release the documents for public scrutiny. This is how civil society can hold governments to the high standards of transparency and accountability that befit a democracy. Indiscriminate Internet blockades are not likely to safeguard public order in this day and age; they carry high social and economic costs and are often ineffective. A proportionality and necessity test, along with a cost-benefit analysis to determine the right course of action, is essential at this juncture. Indian civil society needs to push for a transparent and accountable system that ensures better Internet governance.

Read the whole post here.

Read More

Data Protection Bill, an unfinished piece of work

Bill demands age verification and consent from guardians of children for data processing

Shashi Tharoor has a strong case when he says that the Personal Data Protection Bill should have gone to the information technology standing committee. It sets a bad precedent when issues as important as this Bill do not go through the proper channels of debate. Given the nature of the Bill, there is tremendous scope for discourse and disagreement.

Let us begin with the most debated aspect of this legislation, the Data Protection Authority (DPA). Because the mandate of the Bill is so large, it can only set guidelines and give direction on where the data protection space should go. The heavy lifting of enforcement, monitoring, and evaluation has to fall on the shoulders of a different (and ideally independent) body. In this case, it is the DPA that has the duty to protect the interests of data principals, prevent any misuse of personal data, ensure compliance with the Act, and promote awareness about data protection. The body needs to enforce the Bill down to auditing and compliance, maintain a public database listing significant data fiduciaries along with a ranking reflecting their level of compliance, and act as a check and balance on the government.

However, the DPA may end up not being the force of objective balance that it has often been made out to be in the Bill. Here is why. The body will have a total of seven members (a chairperson and six others). All of them will be appointed by the government, based on the recommendations of the cabinet secretary, the secretary to the Government of India in the ministry (or department) dealing with legal affairs, and the secretary to the ministry (or department) of electronics and information technology. All of this falls under the mandate of the executive, with no involvement required from the judiciary or, for that matter, the legislature. Moreover, the current version of the Bill does not specify who (or which department) in the central government these recommendations will go to. Is it MeitY? NITI Aayog? The PMO? There is no clarity.

One cannot help but notice a pattern here. The Bill itself is going to go to a committee dominated by members of the ruling party and the enforcer is going to be wholly constituted by the executive.

Where is the feedback loop? Or the chance for scrutiny? You could at this point begin questioning how independent the DPA is going to be in its values and actions.

That is not to say that the Bill is all bad. Specifically, it does a good job of laying out rights over the personal and sensitive personal data of children, something that is not talked about often. The Bill takes a unique approach here, classifying companies that deal with children's data as guardian data fiduciaries. That is crucial because children may be less aware of the risks, consequences and safeguards concerning the processing of their personal data, and of their rights in relation to it. The Bill clearly requires these guardian data fiduciaries to demand age verification and consent from guardians for data processing. Fiduciaries are also not allowed to profile, track, monitor or target ads at individuals under 18.

This is a loss for Facebook. The minimum age to be on the social media platform is 13, and Facebook's business model is to profile, track, monitor, and micro-target its users. One of two things will happen here. Facebook will either have to raise the bar for entry onto the platform to 18, as per the Bill, or it will need to ensure that its algorithms and products do not apply to users who are below 18. Either way, expect pushback from Facebook on this, which may or may not result in the section being modified.

The other thing the Bill should add on children's rights is a requirement to simplify privacy notices and permissions so that they are consistent with global standards. For instance, the GDPR mandates asking for consent from children in clear and plain language. There is value in making consent consumable for children and adults alike, so provisions in this regard should apply not just to children but also to adults, mandating a design template for how and when consent should be asked for.

In sum, the Bill is an unfinished product in many ways. It has good parts, such as the section on the personal and sensitive personal data of children. However, it needs debate and scrutiny from multiple stakeholders to guide the DPA to be the best version of itself, and it is in the government's hands to make that happen.

Read More

2020 cybersecurity policy has to enable global collaboration

The rapid expansion of digital penetration in India brings with it the need to strengthen cybersecurity. The critical nature of the myriad cyber threats that India faces was underscored by the recent breaches at the Kudankulam nuclear power plant and the Indian Space Research Organisation. These were just two of the 1,852 cyber-attacks estimated to have hit entities in India every minute in 2019. Symantec's 2019 Internet Security Threat Report ranks India second on the list of countries affected by targeted attack groups between 2016 and 2018.

It is clear that India faces expanded and more potent cyber threats. Given this, the new national cybersecurity policy, set to be announced early next year, should improve on the shortcomings of the previous policy of 2013. The most significant of these were the absence of clear, measurable targets, the failure to set standards for the private sector, and a limited focus on international collaboration.

In many ways, the broad thrust of the 2013 policy was on point. It argued for the need to build a "secure and resilient cyberspace," given the significance of the IT sector in fostering growth while leading to social transformation and inclusion. This called for creating a "secure computing environment and adequate trust and confidence in electronic transactions, software, services, devices and networks". Since then, certain steps have been taken to operationalise the policy. These include the establishment of the National Cyber Security Coordination Centre and the Cyber Swachhta Kendra, along with announcements to set up sectoral and state CERTs and to expand the number of standardisation, testing and quality certification facilities. However, much more needs to be done, and at a faster pace.

While it is no one's argument that state capacity can be augmented overnight, setting clear targets can help drive action towards an identified goal. The lack of such targets in the 2013 policy means it is extremely difficult today to assess whether the policy had the desired impact. Five-year plans are well-written documents, whether or not you agree with the goals they outline for the nation, or even with the five-year approach itself. The most quantifiable item on the agenda of the 2013 cybersecurity policy was the objective to create a workforce of 500,000 professionals skilled in cybersecurity over the next five years through capacity building, skill development, and training. That objective set a number one can look at five years later to see whether expectations were exceeded or fell short. The data in this regard is sobering. In 2018, IBM estimated that India was home to nearly 100,000 trained cybersecurity professionals. What is further alarming is that it estimated the total number needed at nearly three million.
The 2020 policy must, therefore, not just identify clear targets but also identify the ways and means through which those targets should be met. Almost everything else in the 2013 document was fairly ambiguous. It contained repeated references to adopting and adhering to global standards for cybersecurity, but there was no clarity on which specific standards should be followed and how long industry should take to adopt them.

This brings us to the second shortcoming. The policy at the time was hoping to balance a trade-off between encouraging innovation and ensuring that basic standards for security and hygiene were met. When it came to the private sector, it repeatedly used words such as "encourage", "enable" and "promote", being careful not to make anything mandatory. Even when it did mandate something, say global best practices for cybersecurity in critical infrastructure, it is hard to say how it planned to declare the mandate a success or a failure. This is again a pitfall the 2020 policy must avoid. The policy must establish or identify standards that industry should adopt within a fixed timeframe. There is also a need for the government to engage with the private sector, particularly when it comes to sharing skills and expertise.

Finally, on international collaboration, the 2013 policy argued for developing bilateral and multilateral relationships in the area of cybersecurity and for enhancing national and global cooperation among security agencies, CERTs, defence agencies and forces, law enforcement agencies and judicial systems. Since then, India has entered into a number of cybersecurity-related MoUs. However, there is an urgent need to put in place domestic frameworks, for instance with regard to data protection, that will enable broader global collaboration and participation in rule-setting. Unfortunately, this has not been happening.
For instance, India is not a signatory to the Budapest Convention, which would have allowed easier access to data for law enforcement. It also did not enter into an executive agreement under the US-initiated CLOUD Act. On a related note, the government did not sign the Osaka Track, a plurilateral data-sharing arrangement proposed at the 2019 G20 Summit. These are important dialogues that India must be part of if it wants to build a resilient and thriving cyber ecosystem.

Read More

Personal Data Protection Bill has its flaws

Data Protection Authority can potentially deal with brokers and the negative externality

Indian tech policy is shifting from formative to decisive. Arguably the biggest step in this shift comes this week, as the Personal Data Protection Bill will (hopefully) be debated and passed by Parliament. The Bill itself has gone through public (and private) consultation, but it is still anyone's guess what the final version will look like.

Based on the publicly available draft, there is a lot right with the Bill. The definitions of different kinds of data are clear, and there is a lot of focus on consent. However, there is not enough focus on regulating data brokers, and that can be a problem. Data brokers are intermediaries who aggregate information from a range of sources. They clean, process, and/or sell the data they hold. They generally source this data either from what is publicly available on the internet or from companies that collected it first-hand.

Because the Bill does not explicitly discuss brokers, problems lie ahead. Broadly, you could argue that brokers come under either the fiduciary or the processor category. But imagine a case where brokers in India sell lists of people who have been convicted of rape, and the list ends up becoming public information.

Similarly, think about cases where databases of shops selling beef, of alcoholics, or of people with erectile dysfunction are released into the wild. The latter two are instances the US is somewhat familiar with. A data broker can ask its clients not to re-sell the data, or expect certain standards of security to be maintained, but there is no way to logistically ensure that the client adheres to this in a responsible manner. The draft Bill talks about how to deal with breaches and who should be notified. But breaches are, by definition, unauthorised, whereas a data broker's whole business model is selling or processing data, all of which is legal. So, how should the Indian government look at keeping data brokers accountable? Some would argue that the answer lies in data localisation. But localisation will only ensure that data is stored and processed domestically. Even if the broker is located domestically, it does not matter unless there is a provision in law mandating accountability.

The issue around brokers is also unlikely to be handled in the final version of the Bill. Even though it is important and urgent, it does not take precedence over more fundamental issues. What is likely to happen is that data brokers and their activities will be subject to the mandate of the Data Protection Authority (DPA), due to be formed after the Bill is passed.

Once the DPA is formed, there are a few ways in which it can potentially deal with brokers and the negative externality their role brings.

One option could be to hold data brokers accountable once a breach has occurred and a broker has been identified as culpable. The problem here is that data moves fast: by the time there is a punitive measure in response to a breach, the damage may already have been done. In addition, such a measure would encourage brokers to hide traces of the breaches that lead back to them.

Another alternative could be to require every data broker to register itself. But that could incentivise more data brokers to move out of the country while maintaining operations in India.

Rohan is a technology policy analyst at The Takshashila Institution.

This article was first published in Deccan Chronicle.

Read More

A small step for data protection, big leap awaited

It is an exciting time to be in the Indian tech policy space. The government has listed the Personal Data Protection Bill in Parliament for the winter session. The Union Cabinet has approved the Bill, and it is likely to be introduced for discussion before the ongoing winter session of Parliament ends on December 13.

Going forward, this Bill will update the currently non-existent standards for privacy and consent. The law will also (as stated in the draft Bill prepared by a high-level committee headed by former Supreme Court judge B N Srikrishna) set up a data protection authority. As these developments occur, and India begins to set its own standards in the space, it is important to keep in mind that this milestone is the beginning of stronger data protection, not the end.

One of the most important aspects of the Bill is the setting up of the data protection authority (DPA). While the draft Bill sets out broad principles for privacy, a huge chunk of the work has been left for the DPA to carry forward. There are big-ticket items that need to be resolved while keeping in mind the larger vision for data protection in India. For instance, the authority will need to establish and enforce the conditions under which personal data can be collected, accessed, and processed without consent. The DPA will need to be the policy formulator as well as the enforcer. Given the pace of progress in technology, the DPA will also need to be proactive in its approach rather than reactive. All of this means that the authority is always going to be strapped for capacity and will need appointees whose values align with the law's larger vision. It is a thankless task to manage trade-offs between privacy and innovation in a country like India. That is what the Bill formally sets in motion by establishing the DPA.

Momentous as the Bill's passage will be, it is crucial to note that it will not automatically mean that personal data is safeguarded going forward. There is potentially a 12-month period between the date it is signed off by the President and when it is finally notified by the central government. This can be followed by a three-month period to establish the Data Protection Authority and another nine to fifteen months for all provisions to come into effect. Cumulatively, this could mean more than two years after Presidential assent before there is a fully functional data protection regime in place. The process could conclude earlier, but given the complexity of the tasks at hand it is not unreasonable to expect that most of the allowed timelines will be utilised.

As with any policy, the outcomes will depend on how effectively it can be implemented. Much has already been written about the drawbacks of a consent-based model resulting in consent fatigue. The Bill calls for privacy by design, but ensuring accountability will be difficult since most design decisions are opaque. A recent study on violations of the EU's General Data Protection Regulation (GDPR) and ePrivacy Directive revealed that 54 per cent of websites tested were non-compliant. Also, considering the number of data fiduciaries (not limited to the online world) one can interact with on a daily basis, a person may never find out if their personal data has been misused, or which entity is responsible. The Bill proposes mechanisms for addressing grievances. It also requires entities that handle large volumes of user data to undergo audits and assessments. How responsive and transparent these processes turn out to be will be indicators of how effective the policy is. There have been only limited studies on privacy in the Indian context, but most existing literature points to the collectivist nature of society to explain the low levels of privacy consciousness. While awareness is growing, if people display a high level of apathy towards ensuring the protection of their personal data, it may push data fiduciaries down the path of non-compliance.

The government should table the Bill at the earliest to allow sufficient time for discussing the finer aspects of the Bill on the floor of the house. The number of questions posed to MEITY on the topic of privacy and data protection indicates a high degree of interest in Parliament on the subject. The government should also endeavour to remain as transparent as possible when framing the remaining provisions. Simultaneously, society should not slide into complacency after the passage of the Bill. Instead, it must continue to stay engaged to ensure that we have a strong data protection regime that succeeds in safeguarding Indians’ fundamental right to privacy.

(Rohan Seth and Prateek Waghre are technology policy analysts at The Takshashila Institution)

This article was originally published in Deccan Herald.

Read More

Joining a New Social Media Platform Does Not Make Sense

Mastodon is what's happening in India right now. Indian Twitter users are moving to the platform and have taken to using hashtags such as #CasteistTwitter and #cancelallBlueTicksinIndia. A key reason is that Twitter has been, to put it mildly, less than perfect in moderating content in India. There is the incident with lawyer Sanjay Hegde that caused this to blow up, along with accusations that Twitter has been blocking hundreds of thousands of tweets in India since 2017, with a focus on accounts from Kashmir.

Enter Mastodon. The platform, created by developer Eugen Rochko, is open-source, so no one entity gets to decide what content belongs in the communities there. The data on Mastodon is also not owned by a single corporation, so you know that your behaviour there is not being quantified and sold to people who would use it to profile and target you. Plus, each server (community) is relatively small, with its own admin, moderators and, by extension, code of conduct. All of this sounds wonderful. The character limit is also 500 as opposed to 280 (if that is the sort of thing you consider an advantage).

Mastodon moves the needle forward by a significant increment when it comes to social networking. The idea is for us to move towards a future where user data isn't monetised and people can host their own servers instead. As a tech enthusiast, that sounds wonderful, and I honestly wish this is what Twitter had been. Keeping all of that in mind, I don't think I will be joining Mastodon. Hear me out. A large part of it is not that Mastodon has problems of its own (it does); let's set those aside for now and move on to the attention economy. Much like how goods and services compete for a share of your wallet, social media has for the longest time been competing for attention and mind-space. The more time you spend on a platform, the more ads you will see and the more money it will make.
No wonder it is so hard to quit Instagram and Facebook. Joining a new social media platform today is an investment that does not make sense unless the old one shuts down. There is a high chance of people initially quitting Twitter, only to come back to it while now also being hooked on another platform. The more platforms you are on, the thinner your attention is stretched. That is objectively bad for anyone who thinks they spend a lot of time on their phone. If you are lucky enough to be one of the few people who do not suffer from that, and are indifferent to the dopamine that notifications induce in your brain, this one does not apply to you.

Then there are the network effect and inertia. I, for one, am for moving the needle forward little by little. But here, there is little to gain right now, and more to lose. Network effects arise when products (in this case, platforms) gain value as more people use them. So it makes sense for you to use WhatsApp and not Signal, as all your friends are on WhatsApp. Similarly, it makes sense for you to be on Twitter, as your favourite celebs and news outlets are there. Mastodon does not have the network-effect advantage, so people who do not specifically have their network on Mastodon do not get a lot of value out of using it.

In addition, there is inertia. Remember when we set aside Mastodon's problems earlier? Here is where they fit in. Mastodon is not as intuitive as Twitter or Facebook. That makes it a deal-breaker for people of certain ages, and a significant con for anyone who does not want to spend a non-trivial chunk of their time learning about servers, instances, toots, and so on. There also isn't an official Mastodon app; instead there are a bunch of client apps, the most popular among them being Tusky, but reviews will tell you that it is fairly buggy, and that is to be expected.

There is so much right with Mastodon.
It is a great working example of the democratisation of social media. It also happens to exist in an age where it would be near impossible to get funding for, or to start, a new social media platform. The problem is that people who don't explicitly feel the need or see the value in joining Mastodon are unlikely to split their attention further by joining a new platform. The switching costs, network effects, and inertia are simply too high.

Rohan is a policy analyst at The Takshashila Institution and the co-author of Data Localization in a Globalized World: An Indian Perspective.

This article was first published in Deccan Chronicle.

Read More

How to respond to an 'intelligent' PLA

Advancements in Artificial Intelligence (AI) technologies over the next decade will have a profound impact on the nature of warfare. The increasing use of precision weapons, training simulations and unmanned vehicles is merely the tip of the iceberg. Going forward, AI technologies will not only have a direct battlefield impact in terms of weapons and equipment but will also affect planning, logistics and decision-making, requiring new ethical and doctrinal thinking. From an Indian perspective, China's strategic focus on leveraging AI has serious national security implications.

Read the full article on the Deccan Herald website.

Read More
High-Tech Geopolitics Prateek Waghre

Lessons from Facebook and Twitter's Political Ads Policies

Over the course of the last few weeks, we have seen Facebook and Twitter take opposing views on the issue of political ads. While the issue does not have immediate implications for Indian politics, the two companies' decisions, their actions throughout the episode, and the reactions to them are emblematic of the larger set of problems surrounding their policies. They serve as a reminder that self-regulation alone will not make these platforms neutral spaces for public discourse.

In late October, Facebook infamously announced that it would not fact-check political ads. Shortly after, Twitter's CEO Jack Dorsey announced via Twitter that the company would not allow any political ads after November 22. Though Twitter is not alone in this approach, its role in public discourse differs from that of companies like LinkedIn and TikTok, which already have similar policies. Google, meanwhile, announced its own political ads policy on November 20. The policy aims to limit micro-targeting across search, display and YouTube ads; crucially, it reiterated that no advertisers (political or otherwise) are allowed to make misleading claims. At face value, it may seem that one of these approaches is far better than the others, but a deeper look brings forth challenges that all of them will find hard to overcome.

Potential for misuse

To demonstrate the drawbacks of Facebook’s policy, US lawmaker Elizabeth Warren’s Presidential campaign deliberately published an ad with a false claim about Facebook CEO Mark Zuckerberg. In another instance, Adriel Hampton, an activist, signed up as a candidate for California’s 2022 gubernatorial election so that he could publish ads with misleading claims (he was ultimately not allowed to do so).

While Twitter's policy disallows ads from candidates, parties and political groups/political action committees (PACs), Facebook claims it will still fact-check ads from PACs. For malicious actors determined to spread misinformation or disinformation through ads, these distinctions will not be much of an impediment. They will find workarounds.

While most of the conversation has been US-centric, both companies have a presence in over 100 countries. A significant amount of local context and human effort is required to consistently enforce policies across all of them. The ongoing trend of substituting human oversight with machine learning could limit the acquisition of such local knowledge. For example, does Facebook's policy of not naming whistle-blowers work in every country in which it operates?

Notably, both companies stressed how little impact political ads had on their respective bottom lines. Considering the skewed revenue per user in North America and Europe compared with Asia-Pacific and the rest of the world, the financial incentive to enforce such resource-intensive policies equitably is limited. Both companies also have a history of inconsistent responses to moral panics, resulting in uneven implementation of their policies.

A self-imposed ban on political ads by Facebook and Twitter in Washington, adopted to avoid dealing with complex campaign finance rules, has resulted in uneven enforcement and a complicated set of rules that have proven advantageous to incumbents. In response to criticism that these rules would adversely impact civil society and advocacy groups, Twitter initially said 'cause-based ads' would not be banned, and ultimately settled on limiting them by preventing micro-targeting. Either way, both approaches are likely to favour incumbents and those with deeper pockets.

Fixing Accountability

The real problems for social media networks go far beyond micro-targeted political advertising, and the shortcomings across capacity, misuse and consequences apply there as well. The flow of misinformation and disinformation is rampant. A study by the Poynter Institute highlighted that misinformation and disinformation outperformed fact-checks by several orders of magnitude. Research by the Oxford Internet Institute and Freedom House has revealed the use of disinformation campaigns online and the co-option of social media by various governments to power a shift towards illiberalism. Conflict and toxicity now seem to be features meant to drive engagement. Rules are implemented arbitrarily, and suspension policies are not consistently enforced. The increased use of machine learning algorithms in content moderation, which can be gamed by mass reporting, coincides with a reduction in human oversight.

Social media networks are classified as intermediaries, which grants them safe harbour: they cannot be held accountable for content posted on them by users. 'Intermediary' is a very broad term, covering everything from ISPs and cloud services to end-user-facing websites and applications across various sectors. Stratechery, a website that analyses technology strategy, proposes a framework for content moderation in which both discretion and responsibility are higher the closer a company is to the end user. Under this framework, platforms like Facebook, Twitter and YouTube should bear more responsibility and exercise more discretion than ISPs or cloud service providers. It does not, however, explicitly call for fixing accountability, which cannot be taken for granted.

Unfortunately, self-regulation has not worked in this context, and the platforms' status as intermediaries may require additional consideration. India's proposed revised Intermediary Guidelines already tend towards over-regulation in attempting to address the challenges posed by social media companies, adversely impacting many other kinds of intermediaries. The real challenge for policymakers and society in countries like India is to strike a balance between holding large social media networks accountable and not creating rules so onerous that they can be weaponised to limit freedom of speech.

(Prateek Waghre is a Technology-Policy researcher at Takshashila Institution. He focuses on the governance of Big Tech in Democracies)

This article was originally published on 21st November 2019, in Deccan Herald.

Read More

We Need Our Own Honest Ads Act

Recent developments in online advertising have been uplifting. Facebook (and by extension, Instagram) has been running a policy meant to block predatory ads that target people who are overweight or have skin conditions, pushing unusual and often medically dangerous miracle cures. Google, which makes over $100 billion in online ad revenue, has released a statement declaring a ban on ads selling treatments that have no established biomedical or scientific basis. Twitter has declared that it will not accept ads from state-controlled media entities.

This is not to say that the advertising policies of these companies are perfect, as incidents reported by The Verge and CNBC will tell you. However, things have been improving at a steady pace as far as advertising policies are concerned.

A major catalyst for this change was the 2016 US election, which saw the potential of online advertising abused to target voters. Since then, there has been bipartisan support in the US for greater transparency in online advertising. This includes disclosing who paid for public ads, how many people saw those ads, and how the purchaser can be contacted.

There are two problems with this push for greater transparency in advertising. First, it never became law. Second, even if it had, its impact would have been limited to the US.

Why we still lack a law that enforces greater transparency in advertising is an interesting story, and much of it revolves around Facebook, with its conclusion set to impact other players in online advertising. The bill, called the Honest Ads Act, was introduced in the Senate in 2017. Had it become law, its success or failure would have given other countries a template to work with. As of now, they will need to proceed without precedent.

Days after the bill was introduced, Facebook announced that it would be updating its Advertising Transparency and Authenticity Efforts. Mark Zuckerberg declared his support for the Honest Ads Act in a separate Facebook post, stating, "Election interference is a problem that's bigger than any one platform, and that's why we support the Honest Ads Act". (As an important side note, Twitter also announced its decision to back the Act, but the focus here is on Facebook because of its size, position, and role in the 2016 US election.)

Once Facebook expressed its support for the Act and declared its intent to self-regulate according to the bill, the issue lost momentum. At the time, Zuckerberg's testimony on Capitol Hill was impending, and the news cycle shifted its attention. Senate Majority Leader Mitch McConnell brought the First Amendment into the argument, saying he was sceptical of proposals (like the Honest Ads Act) that would penalise American citizens trying to use the internet and to advertise. In retrospect, you could argue that Facebook could have best supported the Honest Ads Act by not declaring its support.

Regardless, the implications of these events affected players across a wide spectrum. Because there was no legal requirement to comply, other avenues of online advertising (read: Twitter, Google) did not need to meet a set standard that could be used as a yardstick to judge them. Moreover, the problem with the freedom-of-speech argument is that transparency in ads does not directly impact free speech. The same argument could be extended to revoke the laws that mandate transparency in TV and radio ads in the US. So where is the crackdown on transparency in TV and radio?

The Honest Ads Act is relevant because it had the potential to set the tone for how transparent ad regulation should be in other countries. The US is not even the most significant user base for these platforms. As you might expect, transparency in political ads would be useful for other countries that also hold elections. India, for example, has over 270 million Facebook users, a significant percentage of whom participated in the general elections. Understandably, advertising on social media sites such as Facebook was an integral part of most campaign strategies. It would help to have a law that lets voters identify who is paying for which political ad, and, conversely, which ads might be facts and which might be false propaganda.

Asking online ad companies such as Facebook to regulate themselves will have exactly the effect it is having now: they will move towards better ad and transparency policies at their own pace, influenced by the prevailing narrative. For most countries, that is not enough. Having a law in the countries where these platforms operate is more efficient. It is not just the United States that needs its ads to be honest.

The writer is a Research Analyst with Takshashila Institution, Bengaluru.

This article was first published in Deccan Herald.

Read More
High-Tech Geopolitics, Economic Policy Prateek Waghre

Why we must be vigilant about mass facial surveillance

The recent revelations that the NSO Group's Pegasus spyware was used to target an estimated two dozen Indian lawyers and activists through a vulnerability in WhatsApp have once again brought the targeted surveillance of citizens into focus. As the saying goes, no good crisis should go to waste. This is an opportunity to raise public awareness about trends in mass surveillance involving facial recognition systems and CCTV cameras, which affect every citizen irrespective of whether or not they have a digital presence today.

The Panopticon, conceptualised by philosopher Jeremy Bentham, was a prison designed so that prisoners could be observed from a central tower without knowing when they were being watched, forcing them to self-regulate their behaviour. Michel Foucault later extended this idea, arguing that modern states could no longer resort to violent and public forms of discipline and needed a more sophisticated form of control, using observation and surveillance as a deterrent.

Live facial recognition, combined with an ever-expanding constellation of CCTV cameras, has the potential to make this form of control even more powerful. It therefore suits governments around the world, irrespective of ideology, to expand their mass surveillance programs with stated objectives like national security and the identification of missing persons, and, in the worst cases, to continue maximising these capabilities to enable the establishment of an Orwellian state.

Global trends
China's use of such systems is well documented. As per a study in the Journal of Democracy, almost 626 million CCTV cameras will be deployed around the country by the end of 2020. It was widely reported in May that China's facial recognition database includes nearly all its citizens. Facial recognition systems are used in public spaces for purposes ranging from access to services (hotels, flights, public transport and so on) to the public shaming of individuals for transgressions such as jaywalking, by displaying their faces and identification information on large screens installed at traffic intersections, and even to monitor whether students are paying attention in class.

The former was highlighted by an almost comedic case in September, when a young woman found that her access to payment gateways and her ability to check in to hotels and trains were affected after she underwent plastic surgery. There is also the fear that facial recognition technology is being used to surveil and target minorities in Xinjiang province.

In Russia, Moscow mayor Sergei Sobyanin has claimed that the city had nearly 200,000 surveillance cameras. There have also been reports that the city plans to build AI-based Facial Recognition into this large network with an eye on the growing number of demonstrations against the Putin government.

Even more concerning is the shift by countries with a 'democratic ethos' towards deploying and expanding the use of such systems. Australia was recently in the news for advocating face scans as a condition for accessing adult content. Some schools in the country are also trialling the technology to track attendance. France is testing a facial recognition-based national ID system. In the UK, the High Court dismissed an application for judicial review of automated facial recognition. The challenge itself was a response to pilot programs run by the police and the installation of such systems by various councils, which, the petitioners argued, occurred without the consent of citizens and without a legal basis.

There was also heavy criticism of Facial Recognition being used at football games and music concerts. Its use in personal spaces, too, continues to expand as companies explore potential uses to measure employee productivity or candidate suitability by analysing facial expressions.

There are opposing currents as well: multiple cities in the US have banned, or are contemplating banning, the deployment of the technology by law enforcement and government agencies. Sweden's Data Protection Authority fined a municipality after a school conducted a pilot to track attendance, on the grounds that it violated the EU's General Data Protection Regulation (GDPR).

Advocacy groups like the Ada Lovelace Institute have called for a moratorium on all use of the technology until society can come to terms with its potential impact. Concerns have been raised on two grounds. First, the accuracy of such systems is currently low, severely increasing the risk of misidentification when they are used by law enforcement agencies. Second, since the technology learns from existing databases (e.g. a criminal database), any bias reflected in such a database, such as a disproportionate representation of minorities, will creep into the system.

Also, in many cases there is limited information on where and how such systems are being used. Protestors in Hong Kong and, recently, Chile have shown the awareness to counter law enforcement's use of facial recognition by targeting the cameras. The means have varied from face-masks and clothing imprinted with multiple faces to pointing numerous lasers at the cameras, and even physically removing visible ones.

India’s direction
In mid-2019, the National Crime Records Bureau of India put out a tender inviting bids for an Automated Facial Recognition System (AFRS) without any prior public consultation. Meeting minutes of a pre-bid seminar accessed by the Internet Freedom Foundation indicated that there were 80 vendor representatives present. 

Convenience is touted as the main benefit of various pilot programs that use 'faces' as boarding cards at airports in New Delhi, Bengaluru and Hyderabad as part of the Civil Aviation Ministry's Digi Yatra program. Officials have sought to allay privacy concerns by stating that no information is stored. City police in New Delhi and Chennai have run trials in the past. The Hyderabad police had, until recently, routinely updated their Twitter accounts with photos of officers scanning people's faces with cameras. Many of these posts were deleted after independent researcher Srinivas Kodali repeatedly questioned the legality of such actions.

Many of the aforementioned trials reported facial recognition accuracy rates in the low single digits. The State of Policing in India (2019) report by Lokniti and Common Cause indicated that roughly 50 per cent of police personnel believe that minorities and migrants are 'very likely' or 'somewhat likely' to be naturally prone to committing crimes. These aspects are concerning when considering capability, capacity and the potential for misuse of the technology. False positives resulting from a low accuracy rate, combined with potentially biased law enforcement and a lack of transparency, could make it a tool for the harassment of citizens.
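The false-positive concern is, at its core, base-rate arithmetic, and a short sketch makes it concrete. All the numbers below are hypothetical assumptions for illustration, not figures from any reported trial: even granting the system accuracy far better than the trials above achieved, scanning a large crowd for a small watchlist flags mostly innocent people.

```python
# Hypothetical base-rate sketch for mass face scanning.
# Every number here is an assumption, not data from any real deployment.
crowd_size = 1_000_000       # faces scanned in public spaces in a day
on_watchlist = 100           # people in that crowd actually on a watchlist
true_positive_rate = 0.90    # assumed: 90% of genuine matches are flagged
false_positive_rate = 0.01   # assumed: 1% of innocent people are flagged

true_positives = on_watchlist * true_positive_rate
false_positives = (crowd_size - on_watchlist) * false_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"Innocent people flagged: {false_positives:.0f}")
print(f"Share of flags that are genuine: {precision:.1%}")
```

Under these assumptions, nearly 10,000 innocent people are flagged against 90 genuine matches, so under 1 per cent of flags are correct. The point of the sketch is that precision collapses as the scanned population grows, which is why low trial accuracy combined with mass deployment is the worrying combination.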

Schools have attempted to use such systems to track attendance. Gated communities and offices already deploy large numbers of CCTV cameras, and a transition to live facial recognition is an obvious next step. However, given that trust in tech companies is at a low, and given the existence of facial recognition training datasets such as MegaFace (a large dataset used to train facial recognition algorithms on images uploaded to the internet, some as far back as the mid-2000s, without consent), privacy advocates are concerned.

Opposition and future considerations for society
Necessary and Proportionate, a coalition of civil society organisations and privacy advocates around the world, proposes thirteen principles on the application of human rights to communications surveillance, many of which are applicable here as well. Among them: legality, necessity, legitimate aim, proportionality, due process with judicial and public oversight, prevention of misuse, and a right to appeal. Indeed, most opposition from civil society groups and activists to government use of mass surveillance rests on these principles. Looked at through the lenses of intent (stated or otherwise), capacity and potential for misuse, these are valid grounds on which to question mass surveillance by governments.

It is also important for society to ask and seek to answer some of the following questions: Is the state the only entity that can misuse this technology? What kind of norms should society work towards when it comes to private surveillance? Is it likely that the state will act to limit its own power especially if there is a propensity to both accept and conduct indiscriminate surveillance of private spaces, as is the case today? What will be the unseen effects of normalising mass public and private surveillance on future generations and how can they be empowered to make a choice?

This article was first published in Deccan Herald on 11th November, 2019. 

Read More

Govt needs to be wary of facial recognition misuse

India is creating a national facial recognition system. If you live in India, you should be concerned about what this could lead to. It is easy to draw parallels with 1984 and say that we are moving towards Big Brother at pace, and perhaps we are. But a statement like that, for better or worse, would accentuate the dystopia and may not be fair to the rationale behind the move. Instead, let us sidestep conversations about the resistance, doublethink, and thoughtcrime, and look at why the government wants to do this and the possible risks of a national facial recognition system.

WHY DOES THE GOVERNMENT WANT THIS?

Let us first look at it from the government's side of the aisle. A national facial recognition database can have a lot of pros. Instead of looking at this as Big Brother, the best-case scenario is that the Indian government is pursuing better security, safety, and crime prevention, and a system that would aid law enforcement. In fact, the request for proposal by the National Crime Records Bureau (NCRB) says as much: 'It (the national facial recognition system) is an effort in the direction of modernizing the police force, information gathering, criminal identification, verification and its dissemination among various police organizations and units across the country'.

Take it one step further: later down the line, the same database could also be used to achieve gains in efficiency and productivity. For example, schools could take attendance with FaceID-like software, or checking train tickets could become more efficient (discounting the occasional case of plastic surgery that alters your appearance significantly).

POTENTIAL FOR MISUSE

The underlying assumption for this facial recognition system is that people implicitly trust the government with their faces, which is wrong. Not least because even if you trust this government, you may not trust the one that comes after it. This is especially true when you consider the power that facial recognition databases provide administrations.

For instance, China has successfully used AI and facial recognition to profile and suppress minorities. Who is to guarantee that the current or a future government will not use this technology to keep out or suppress minorities domestically? The current government has already taken measures to ramp up mass surveillance. In December last year, the Ministry of Home Affairs issued a notification authorising 10 agencies to intercept calls and data on any computer.

WHERE IS THE CONSENT?

Apart from the fact that people cannot trust all governments across time with data of their faces, there is also the hugely important issue of consent and the absence of a legal basis. Facial data is personal and sensitive. Not giving people the choice to opt out is objectively wrong.

Consider the fact that once such a database exists, it will be integrated with state police forces across the country; the proposal excerpt quoted above says as much. There is every chance that we are looking at increased discrimination in profiling, with AI algorithms repeating existing biases.

Why should the people not have a say in whether they want their facial data to be a part of this system, let alone whether such a system should exist in the first place?

Moreover, because of how personal facial data is, even law enforcement agencies should have to go through some form of legal checks and safeguards to clarify why they want access to data and whether their claim is legitimate.

DATA BREACHES WOULD HAVE WORSE CONSEQUENCES

Policy, in technology and elsewhere, is often viewed through which outcomes are intended and anticipated. Data breaches are anticipated but unintended. Surely the government does not plan to share or sell personal and sensitive data for revenue. However, considering past trends with Aadhaar and the track record of the State Resident Data Hubs, leaks and breaches are to be expected. Even if you trust the government not to misuse your facial data, you should not be comfortable trusting third parties who went through the trouble of stealing your information from a government database.

Once the data is leaked and being used for nefarious purposes, what would remedial measures even look like? And how would you ensure that the data is not shared or misused again? It is a can of worms which, once opened, cannot be closed.

Regardless of where on the aisle you stand, you are likely to agree that facial data is personal and sensitive. The technology itself is extremely powerful and, in the wrong hands, easily misused. If the government builds this system today, without consent or genuine public consultation, it would all but ensure that this or future administrations can misuse it for discriminatory profiling or for suppressing minorities. So if you do live in India today, you should be very concerned about what a national facial recognition system can lead to.

This article was first published in The Deccan Chronicle. Views are personal.

The writer is a Policy Analyst at The Takshashila Institution.

Read More