Commentary
Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.
Intermediary guidelines might infringe on privacy
This article was first published in Deccan Chronicle.
If you try to keep up to date with the tech policy debates in India, intermediary liability is one of those few topics you cannot escape. In very oversimplified terms, the debate here is whether companies like Facebook should be held accountable for the content that is posted on them.
The Ministry of Electronics and Information Technology (MeitY) came up with proposed changes to the intermediary guidelines back in December 2018. Since then, discourse around the topic has been rife and the new, finalised guidelines are speculated to come out in the next few weeks.
So when Bloomberg reported that MeitY is expected to put out the new rules later this month without ‘any major changes’, speculation around the guidelines was replaced by concern.
One of the most contentious clauses of the intermediary guidelines was to make messages and posts traceable to their origins. That would mean WhatsApp would need to use its resources to track where a message was originating from and then report that to the government.
As The Verge puts it, tech companies could essentially be required to serve as deputies of the state, conducting investigations on behalf of law enforcement, without so much as a court order.
That is deeply troubling. In contemporary India, we have either begun to take secure messaging for granted or just do not think about how secure our communications are today.
Here context matters. More often than not, when I talk about privacy and end-to-end encryption, I get glazed eyes. That is understandable. People find it hard to understand how encryption impacts their lives. But humor me in a thought experiment. As you read this, take a look around you. Take a good look at the person physically closest to you at this moment and ask yourself whether you would be okay with disabling the security on your phone and giving it to them for three days. If the thought of doing that makes you even slightly uncomfortable, you now understand why privacy matters.
Under the new rules, privacy is going to be chipped away for anyone in India who uses WhatsApp (or any other end-to-end encrypted service). Add to that the fact that India today does not have the strongest of institutions. The issue of whether NaMo TV was a governance tool or a political one taught us that. So if there is anything the political climate tells us today, it is that there is a very real chance that these guidelines can and will be used for political gain.
The other side of the story is that these are intermediary guidelines and do not apply only to platforms. 'Intermediary' is a broad term that encompasses not just platforms such as Facebook, Telegram, or Signal, but also cloud service providers, ISPs, and even cybercafés.
Not all of these players have equal access to information when ordered by law enforcement agencies to disclose it. A consultation report released by Medianama listed instances of harassment of intermediaries.
According to the report, ISPs claimed to live under constant threat and to be made to feel like criminals for running their businesses. During raids, people and their families were often asked to part with their phones and electronics, along with their passwords. In fact, according to the report, when a cloud service provider for an app in Andhra Pradesh was approached by the police with a request for information, it went out of business because it was unable to comply. Not all intermediaries are created equal, and these guidelines do not acknowledge that.
But the broader problem I see is that there is no problem statement: what these guidelines are trying to address is not clear. If the agenda is for law enforcement agencies to access information on digital communications (and that is essential to maintain law and order), it does not make sense to do it through these means. There are international provisions that India can and should turn to instead (the CLOUD Act in particular).
Once we go down this route, there is a non-zero chance that intermediaries such as WhatsApp might stop providing their services in India, especially since compliance would set a precedent for other countries to follow India's approach to breaking encryption. It could also end up turning these intermediaries into government lieutenants. Regardless of what platform we choose to communicate on, we need to value privacy going forward. If you disagree, now might be a great time to unlock your phone and hand it over to the person physically closest to you.
The writer is a research analyst at The Takshashila Institution. All views are the author's own.
NRC website imbroglio highlights need for govt accountability
This article was first published in Deccan Chronicle.
Last week, multiple news outlets reported that the website housing NRC data had gone offline. Reportedly, this happened because a cloud services contract procured by Wipro on behalf of the State government of Assam was not renewed, and the service was thus turned off due to non-payment. For now, officials have made assurances that the data itself is safe. Some aspersions have also been cast on former state officials working on the NRC project. This is still a developing story, and there are multiple conspiracy theories being floated about the root cause, ranging on a spectrum from malintent to negligence and good old-fashioned incompetence.
From a public policy perspective, multiple questions come up: should the state be contracting with private enterprise? How accountable should the state be when there is a loss of data or harm caused to people by accumulating this data? How much data should the state gather about its citizens, and what is the potential for misuse? Let's look at them, starting from the narrowest question and then expanding outwards.
AWS V/S MEGHRAJ One of the reasons for outrage has been the use of Amazon Web Services to host this site, especially when the National Informatics Centre (NIC) itself offers a cloud service called 'MeghRaj'. The concern cited is that the data may leave the country, or that private contractors will potentially be able to access sensitive data. It is almost clichéd to say that the Internet has no borders, but this distinction is important. Data is not any safer just by virtue of being in India and at a state-operated facility. On the contrary, it is probably better for a website and its data to be hosted with industry-leading operators that follow best practices and have the expertise to efficiently manage both operations and security. One must consider both the capacity and the role of the state in this context. What is the market failure that the state is addressing by offering cloud hosting services in a market where the likes of Amazon, Google, and Microsoft operate?
The objection regarding contractor access to sensitive information is important and merits further consideration. To a large extent, this can be addressed by a contractual requirement to restrict access to individuals with security clearances. Yes, this brings in the dimension of a principal-agent problem and the lax enforcement of contract law in India. But it is important to contrast it with the alternative, an individual representing the state, where the principal-agent problem is even more acute. As things stand, there are still options to hold a private entity accountable for violation of contract, but there is a lower probability of punitive action against an individual representing the state for harm arising out of action or inaction on their part. As far as causes for outrage go, the fact that the data was stored with AWS should not be one. There are larger aspects at play here.
STATE ACCOUNTABILITY This incident raises a much larger question about the accountability the Government should have for the data it holds. The Indian government keeps a substantial amount of personal and sensitive data on its citizens: for example, data on how much gas you consume, your physical address, the make, model, and registration number of your car, as well as how many times you travelled out of the country in the last 10 years. That is more sensitive information than most companies in the private sector hold.
Keeping this (and the social contract) in mind, how accountable should the government be? According to the draft of the Personal Data Protection Bill, not very. Section 35 of the bill allows the Government to exempt whole departments from the bill, removing checks and balances that should exist when the Government acts as a collector or processor of your data.
How does that make sense? Why should the state be any less accountable than a private enterprise? In fact, the Government has sold the data of its citizens to the private sector for revenue, without their consent (~25 crore vehicle registrations and 15 crore driving licences). As of now, it is hard to conclude whether the incident occurred due to malintent, negligence, or incompetence. But regardless of the cause, it carries a lesson: the Government and all its departments need to be more responsible and held more accountable when it comes to the data they store and process.
IMPLICATIONS OF A DATA-HUNGRY STATE A case can be made that the state is not a monolith and that there exist certain barriers and redundancies because of which databases in the Government do not talk to each other… yet. Chapter 4 of the 2018-19 Economic Survey of India envisioned data as a public good and advocated "combining … disparate datasets." The combination of limited state capacity, lack of accountability, and a hunger for data can be a dangerous one. While capacity can be supplemented by private enterprise, there is no substitute for accountability. In such a scenario, it is extremely important to consider, understand, and debate the chronology, implications, and potential for misuse before going ahead with large-scale activities that could end up severely disrupting many millions of lives.
(The writers are research analysts at The Takshashila Institution. All views are the authors' own and are personal.)
Tackling Information Disorder, the malaise of our times
This article was originally published in Deccan Herald.
The term ‘fake news’ – popularised by a certain world leader – is today used as a catch-all term for any situation in which there is a perceived or genuine falsification of facts irrespective of the intent. But the term itself lacks the nuance to differentiate between the many kinds of information operations that are common, especially on the internet.
Broadly, these can be categorized as disinformation (false content propagated with the intent to cause harm), misinformation (false content propagated without the knowledge that it is false/misleading or the intention to cause harm), and malinformation (genuine content shared with a false context and an intention to harm). Collectively, this trinity is referred to as ‘information disorder’.
Over the last four weeks, Facebook and Twitter have made some important announcements regarding their content moderation strategies. In January, Facebook said it was banning 'deepfakes' (videos in which a person is artificially inserted by an algorithm based on photos) on its platform. It also released additional plans for its proposed 'Oversight Board', which it sees as a 'Supreme Court' for content moderation disputes. Meanwhile, in early February, Twitter announced its new policy for dealing with manipulated media. But the question really is whether these solutions can address the problem.
Custodians of the internet
Before dissecting the finer aspects of these policies to see if they could work, it is important to unequivocally state that content moderation is hard. The conversation typically veers towards extremes: Platforms are seen to be either too lenient with harmful content or too eager when it comes to censoring ‘free expression’. The job at hand involves striking a difficult balance and it’s important to acknowledge there will always be tradeoffs.
Yet, as Tarleton Gillespie says in Custodians of the Internet, moderation is the very essence of what platforms offer. This is based on the twin pillars of personalisation and the 'safe harbour' that they enjoy. The former implies that they will always tailor content for an individual user, and the latter essentially grants them the discretion to choose whether a piece of content can stay up on the platform or not, without legal ramifications (except in a narrow set of special circumstances like child sex abuse material, court orders, etc.). This, of course, reveals the concept of a 'neutral' platform for what it is: a myth. That is why it is important to look at these policies with as critical an eye as possible.
Deepfakes and Synthetic/Manipulated Media
Let's look at Facebook's decision to ban 'deepfakes' using algorithmic detection. The move is laudable; however, it will not address the lightly edited videos that also plague the platform. Additionally, disinformation agents have modified their modus operandi to use malinformation, since it is much harder for algorithms to detect. This form of information disorder is also very common in India.
Twitter’s policy goes further and aims to label/obfuscate not only deepfakes but any synthetic/manipulated media after March 5. It will also highlight and notify users that they are sharing information that has been debunked by fact-checkers. In theory, this sounds promising but determining context across geographies with varying norms will be challenging. Twitter should consider opening up flagged tweets to researchers.
The ‘Supreme Court’ of content moderation
The genesis of Facebook’s Oversight Board was a November 2018 Facebook post by Mark Zuckerberg ostensibly in response to the growing pressure on the company in the aftermath of Cambridge Analytica, the 2016 election interference revelations, and the social network’s role in aiding the spread of disinformation in Myanmar in the run-up to the Rohingya genocide. The Board will be operated by a Trust to which the company has made an irrevocable pledge of $130 million.
For now, cases will be limited to individual pieces of content that have already been taken down and can be referred in one of two ways: By Facebook itself or by individuals who have exhausted all appeals within its ecosystem (including Instagram). And while the geographical balance has been considered, for a platform that has approximately 2.5 billion monthly active users and removes nearly 12 billion pieces of content a quarter, it is hard to imagine the group being able to keep up with the barrage of cases it is likely to face.
There is also no guarantee that geographical diversity will translate into the genuine diversity required to deal with the kind of nuanced cases that may come up. There is also no commitment on when the Board will be able to look into instances where controversial content has been left online. Combined with the potential failings of its deepfakes policy to address malinformation, this will result in a tradeoff where harmful, misleading content will likely stay online.
Another area of concern is the requirement to have an account in the Facebook ecosystem to be able to refer a case. Whenever the Board's ambit expands beyond content takedown cases, this requirement will exclude individuals and groups not on Facebook or Instagram from seeking recourse, even if they are impacted.
The elephant in the room is, of course, WhatsApp. With over 400 million users in India and support for end-to-end encryption, it is the main vehicle for information disorder operations in the country. The oft-repeated demands for weakening encryption and providing backdoors are not the solution either.
Information disorder itself is not new. Rumours, propaganda, and lies are as old as humanity, and surveillance will not stop them. Social media platforms significantly increase the velocity at which such information flows, thereby amplifying the impact of information disorder. Treating this solely as a problem for platforms to solve is equivalent to addressing a demand-side problem through exclusive supply-side measures. Until individuals start viewing new information with a healthy dose of skepticism, and media organisations stop being incentivised to amplify information disorder, there is little hope of addressing this issue in the short to medium term.
(Prateek Waghre is a research analyst at The Takshashila Institution)
Fact-checking alone won't be enough in fight against fake news
Google has recently announced a $1 million grant to help fight misinformation in India. This could not have come at a better time. Misinformation is a reality and a by-product of the Indian and global information age. It could be Kiran Bedi on Twitter claiming that the sun chants Om, or WhatsApp forwards saying that Indira Gandhi entered JNU with force and made the leader of the students' union, Sitaram Yechury, apologise and resign. As someone who was subjected to both these pieces of misinformation, I admit I ended up believing both of them at first, without a second thought.

While both of those stories are relatively harmless, misinformation has an unfortunate history of causing fatalities. For instance, in Tamil Nadu, a mob mistook a 65-year-old woman for a child trafficker. When they saw her handing out chocolates to children, they put two and two together and proceeded to lynch her. Because of instances like these, and because misinformation has the power to shape the narrative, there is an urgent need to combat it.

Countries have already begun to take notice and devise measures. For instance, at a time when ISIS was a greater force and Russia was emerging as a misinformation threat, the US acknowledged that it was engaged in a war against misinformation. To that end, the Obama administration appointed Richard Stengel, former editor of TIME magazine, as Under Secretary for Public Diplomacy at the State Department to deal with the threat. Stengel later wrote a book called Information Wars and acknowledged the limitations of the state in providing an effective counter to misinformation through fact-checking.

When we try to tackle misinformation, we reason through it based on fundamentally incorrect assumptions. Typically, we picture misinformation as a pollutant that hits a population and spreads, and we imagine that the affected population is largely passive and homogenous. This theory does not take into account how people interact with the information they receive or how their contexts shape it. It is a simple theory of communication and does not appreciate the complexities within which the world operates. Amber Sinha elaborates on this in his book, The Networked Public.

Paul Lazarsfeld and Joseph Klapper debunked this theory of a passive population in the 1950s. Their argument was that contexts matter. Mass communication and information combined do have the potential to reinforce beliefs, but that reinforcement largely depends on perception, selective exposure, and the retention of information. Lazarsfeld and Klapper's work is a more sobering look at how misinformation spreads. Most importantly, it tells us why fact-checking doesn't work.

People are not always passive consumers of information. Multiple factors significantly affect how information is consumed, such as perception, selective exposure, and confirmation bias. Two people can interpret the same piece of information differently. This is why the media does not change beliefs and opinions but almost always ends up reinforcing them. So just because people are exposed to facts does not mean the problem is fixed.

I tried to test this myself. To the person who had sent me the story about Indira Gandhi making Sitaram Yechury apologise and resign, I forwarded a link and a screenshot that debunked the forward. To my complete lack of surprise, they did not respond.
Similarly, when Kiran Bedi was told that NASA had not confirmed that the Sun sounds like Om, she responded by tweeting, "We may agree or not agree. Both choices are 🙏". That makes sense. Remember the last time someone fact-checked you, or blurted out a statement that went against your worldview. No one likes cognitive dissonance. When our beliefs are questioned, we feel uneasy, and our brain tries to reconcile the conflicting ideas to make sense of the world again. It is no fun having your belief system shaken.

This brings us back to square one. Misinformation is bad and has the potential to conjure divisive narratives and kill people. If fact-checking does not work, how do we counter it? I do not know the answer, but I would argue that it lies in patience and reason. We often think that leading with facts wins us an argument. In recent times, I have been guilty of that more often than I would like. But doing that just leads to cognitive dissonance, reconciliation of facts and beliefs, and regression to older values. We need to fundamentally rethink how we tackle misinformation.

This is why Google's grant comes at an opportune time. We are yet to see how it will contribute to combating misinformation. While fact-checking is good and should continue, it is not nearly enough to win the information wars.
We need to revise our approach to anonymised data
Data is a complex, dynamic thing, and we often like to sort it into large buckets for classification. The Personal Data Protection Bill does this by creating five broad categories: personal data, personal sensitive data, critical personal data, non-personal data, and anonymised data. While it is nice to have classifications that help us make sense of how data operates, it is important to remember that the real world does not work this way.
For instance, think about surnames. If you had a list of Indian surnames in a dataset, they alone would not be enough to identify people, so you would put that dataset under the ambit of personal data. But since this is India, and context matters, surnames can tell you a lot more about a person, such as their caste. Surnames alone might not identify individuals, but they can go on to identify whole communities. That makes them more sensitive than mere personal data, so you could make a case for including them in the personal sensitive category.
And that is the larger point here: data is dynamic because of how it can be combined or used in varying contexts. As a result, it is not always easy to pin it down into broad buckets of categories.
This is something that is often not appreciated enough in policy-making, especially in the case of anonymised or non-personal data. Before I go on, let me explain the difference between the two, as there is a tendency to use them interchangeably.
Anonymised data refers to a dataset where the immediate identifiers (such as names or phone numbers) are stripped from the rest of the dataset. Non-personal data, on the other hand, is a broader, negative term: anything that is not personal data can technically come under this umbrella, from traffic signal data to a company's growth projections for the next decade.
Not only is there a tendency to use the terms interchangeably, but there is also a false underlying belief that data, once anonymised, cannot be deanonymised. The reason the assumption is false is that data is essentially like puzzle pieces: having enough anonymised data can lead to deanonymisation and the identification of individuals or even whole communities. For instance, if a malicious hacker has access to a history of your location through Google Maps and can combine that with a history of your payments from your bank account (or Google Pay), they do not need your name to identify you.
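To make the 'puzzle pieces' point concrete, here is a minimal, purely illustrative sketch in Python (using pandas). The datasets, device IDs, and areas are all made up; the point is simply that two datasets with no names in them can be linked on shared quasi-identifiers, time and place in this case, and that repeated co-occurrence is enough to single out an individual.

```python
import pandas as pd

# Hypothetical "anonymised" location history: no names, just device pings.
locations = pd.DataFrame({
    "device_id": ["d1", "d1", "d2", "d3"],
    "timestamp": ["2020-01-05 09:02", "2020-01-05 18:45",
                  "2020-01-05 09:05", "2020-01-05 13:30"],
    "area": ["Indiranagar", "Koramangala", "Indiranagar", "Jayanagar"],
})

# Hypothetical "anonymised" payments data: no names, just masked accounts.
payments = pd.DataFrame({
    "account_id": ["a9", "a9", "a7"],
    "timestamp": ["2020-01-05 09:03", "2020-01-05 18:47",
                  "2020-01-05 13:32"],
    "merchant_area": ["Indiranagar", "Koramangala", "Jayanagar"],
})

# Round timestamps to 10-minute buckets so near-simultaneous events match.
locations["bucket"] = pd.to_datetime(locations["timestamp"]).dt.floor("10min")
payments["bucket"] = pd.to_datetime(payments["timestamp"]).dt.floor("10min")

# Link the two datasets on (time bucket, area): a device and an account that
# repeatedly co-occur almost certainly belong to the same person.
linked = locations.merge(
    payments, left_on=["bucket", "area"], right_on=["bucket", "merchant_area"]
)
print(linked.groupby(["device_id", "account_id"]).size())
# d1/a9 co-occur twice; one more auxiliary dataset with a name attached
# (a delivery address, a loyalty card) would complete the re-identification.
```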
In the Indian policy-making context, there does not seem to be a realisation that anonymisation can be reversed once you have enough data. The recently introduced Personal Data Protection Bill appears to rest on this assumption.
Through Section 91, it allows “the central government to direct any data fiduciary or data processor to provide any personal data anonymised or other non-personal data to enable better targeting of delivery of services or formulation of evidence-based policies by the Central government”.
There are two major concerns here. Firstly, Section 91 gives the Government the power to gather and process non-personal data, and multiple other sections ensure that this power is largely unchecked. For instance, Section 35 gives the Government the power to exempt itself from the constraints of the Bill, and Section 42 ensures that, instead of being independent, the Data Protection Authority is constituted by members selected by the Government. Such unchecked power over the collection and processing of data is problematic, especially because it could give the Government the ability to use this data to identify minorities.
Secondly, it just does not make sense to address non-personal data under a personal data protection bill. Even before this version of the Bill came out, there had been multiple calls to appoint a separate committee to come up with recommendations in this space. It would have been ideal to have a different bill that looks at non-personal data. Because the subject is so vast, it does not make sense for it to be governed by a few lines in Section 91 for the foreseeable future.
So the bottom line is that anonymised data and non-personal data can be used to identify people, and the government having unchecked powers to collect and process these kinds of data could lead to severely negative consequences. It would be better, instead, to rethink the approach to non-personal and anonymised data and to set up a separate committee and regulation for this space.
This article was first published in Deccan Chronicle.
(The writer is a technology policy analyst at the Takshashila Institution. Views are personal)
Budget and Cybersecurity, a missed opportunity
This article originally appeared in Deccan Chronicle.

In the lead-up to the 2020 Budget, the industry looked forward to two major announcements with respect to cybersecurity. First, the allocation of a specific 'cyber security budget' to protect the country's critical infrastructure and support skill development. In 2019, Rear Admiral Mohit Gupta (head of the Defence Cyber Agency) had even called for 10% of the government's IT spend to be put towards cyber security. Second, a focus on cyber security awareness programmes was seen as critical, especially considering the continued push for 'Digital India'.

On 1st February, in a budget speech that lasted over 150 minutes, the finance minister made two references to 'cyber'. Once in the context of cyber forensics, to propose the establishment of a National Police University and a National Forensic Science University. Second, cyber security was cited as a potential frontier that quantum technology would open up. This was a step up from the last two budget speeches (July 2019 and February 2019), both of which made no reference to the term 'cyber' in any form. In fact, the last time 'cyber' was used in a budget speech was in February 2018, in the context of cyber-physical weapons. Other recent developments, such as the National Security Council Secretariat's (NSCS) call for inputs for a National Cyber Security Strategy (NCSS), the inauguration of a National Cyber Forensics Lab in New Delhi, and the acknowledgement by Lt Gen Rajesh Pant (National Cyber Security Coordinator) that 'India is the most attacked in cyber sphere', signal that the government does indeed consider cyber security an important area.

While the proposal to establish a National Forensic Science University is welcome, it will do little to meaningfully address the skill shortage problem. The Cyber Security Strategy of 2013 had envisioned the creation of 500,000 jobs over a five-year period. A report by Xpheno estimated that there are 67,000 open cyber security positions in the country. Globally, Cybersecurity Ventures estimates there will be 3.5 million unfilled cyber security positions by 2021, 2 million of which are expected to be in the Asia Pacific region.

It is unfair to expect this gap to be filled by state action alone. Yet the budget represents a missed opportunity to nudge industry and academia towards fulfilling this demand at a time when unemployment is a major concern. The oft-reported instances of cyber or cyber-enabled fraud that one sees practically every day in the newspaper clearly point to a low level of awareness and cyber hygiene among citizens. Allocation of additional funds for MeitY's Cyber Swachhta Kendra in the Union Budget would have sent a strong signal of intent towards addressing the problem.

Prateek Waghre is a research analyst at The Takshashila Institution, an independent centre for research and education in public policy.
Data Protection Bill set to bring yet another shock for companies
The debate and protests around the Citizenship Amendment Act and the National Register of Citizens have dominated headlines around the nation, and rightfully so. While public attention and the news cycle continue to revolve around the issue, the Ministry of Electronics and Information Technology (MeitY) has released a Personal Data Protection Bill.
After reading the Bill, Justice B.N. Srikrishna (chair of the committee that drafted the initial report on data protection) said it has the potential to turn India into an Orwellian state. The statement is based on legitimate grounds, and that should give most people sleepless nights.
The Personal Data Protection Bill does give the government the power to exempt itself from the legislation. It also gives the State significant powers to demand data, and also places significant restrictions on cross-border data flows.
All of this is troubling on multiple levels and is being written about in columns and articles throughout India's tech policy space. What is not getting enough attention, however, is that the Bill is also bad news for the Indian economy, at a time when that is the last thing India needs.
There are several counts on which the Bill, in its current form, will have a negative impact on the economy. Most important among them is the timeline for enforcement. The 2018 version of the Bill provided for a period of adjustment and compliance before the enforcement of its provisions: Section 97's transitional provisions gave industries a period of 18 months before mandating compliance.
Having a defined period of time that affords industry the space to come into compliance is objectively good policy. You could debate how long that period should be, but a transition plan should be common ground. For example, Europe's data protection law, the GDPR, was adopted in April 2016 but was enforced almost two years later, in May 2018.
What this tells us is that policy does not work like a light switch: flicking it on does not magically ensure it will have the intended effects. The current version of the Bill does away with a transitional period altogether. This gives any company that collects data no time to adhere to the Bill's requirements. If implemented without a transition period, the Bill would give the government grounds to penalise companies and impose punishments for not complying with directives that did not exist a day before the Bill was introduced. Bangalore, being the hub of the Indian IT sector, is likely to be impacted the most, with Mumbai, Hyderabad, and Delhi-NCR in tow.
Not only does the Bill offer no transition period, it also makes it a lot harder to carry out data processing outside of India. If companies want to outsource the processing of personal sensitive data to a different country, they need to do so under an intra-group scheme approved by the Data Protection Authority (DPA).
There are two things to consider here. Firstly, the DPA will only be set up after the Bill is passed; staffing it and providing it with the right infrastructure and resources could take months from when the Bill is enforced. Since there is no transition period, companies that outsource data for processing would, until the DPA is formed, not legally be able to do so.
Secondly, even once the DPA is formed, there will be thousands of companies wanting to apply for an intra-group scheme, with new companies forming every month. Individually assessing each company's proposal and including it in an intra-group scheme would put a lot of undue strain on the DPA.
This redundancy is going to impact small and medium enterprises a lot more than big firms. Big companies are likely to be able to afford to build processing capacity in India or afford costlier versions to maintain their standards. Small and medium enterprises, especially Indian firms, are not always going to have the money to comply within the given timeframe.
On a related note, the Bill also creates three tiers of data: personal, personal sensitive, and critical personal data. While the first two are defined within the Bill, critical personal data is not. As you would expect, critical personal data is going to be the tier with the most restrictions and the heaviest burden of compliance.
For instance, while personal and personal sensitive data can be subject to cross-border transfers, critical personal data cannot. This leaves any company that deals with data in a state of anxiety: it forces them to stay in limbo until the third tier is defined, and it will affect how they go about their day-to-day business.
The digital economy is inextricably linked with the traditional economy. All of this (removing the runway for compliance, placing redundancy-ridden restrictions on the cross-border flow of personal sensitive data, and leaving critical personal data undefined) is bound to have a negative impact on the Indian economy. If the Bill is passed in its current form, we are looking at FDI drying up within this sector. Big companies might have deeper pockets, but localisation laws will also go a long way towards making sure that they keep their India-bound spending and outsourcing in check. It is also likely to incentivise small companies and startups to register their businesses elsewhere. All of this is coming at a time when the Indian economy needs it the least.
Rohan is a Policy Analyst at The Takshashila Institution. Views are personal.
This article was first published in Deccan Chronicle.
Does Amazon do more harm than good?
Amid CEO Jeff Bezos's visit to India, Amazon's India website displayed a full-page letter highlighting how Amazon was committed to its small and medium scale business partners. Bezos also announced that Amazon will invest an "incremental US $1 billion to digitise micro and small businesses in cities, towns, and villages across India, helping them reach more customers than ever before". However, as Bezos brought his 'charm offensive' to India, stating how he was inspired by the "boundless energy and grit" of the Indian people, not everyone seemed amused. On the one hand, we had the Union Commerce Minister stating that "Amazon is not doing India a favour by investing..it is probably because it wants to cover its losses incurred to deep discounting"; on the other, we had small and medium retailers protesting against the visit, holding posters of 'Go Back Amazon'. The retailers claimed that Amazon was doing more damage to their business than good.

What is the truth?

A typical brick-and-mortar retailer's capability to sell is constrained by its access to consumers, which in turn is confined by geography. The retailer's market is restricted to people living in the vicinity of the shop. Amazon, on the other hand, offers retailers access to millions of consumers across India. This expansion of the market is beneficial not only to the retailers but also to the final consumers, who now have a plethora of products to choose from.

However, Amazon, apart from being a marketplace connecting sellers and buyers, is also a player on its own platform. It sells various products, from soaps, shirts, and underwear to tech accessories and kitchen supplies, under its own private-label brands such as Solimo, Amazon Essentials, Symbol, and Amazon Basics, among others. This violates the neutrality of the platform.

Think of the last time you went to the second page of Amazon listings to buy a product. Can't remember, right? Most of us tend to buy products, especially standard, low-value ones, from the first five or six listings shown. Amazon has an incentive to favour its own products above the ones sold by sellers, and has been accused of doing so. The reduction in traffic and sales observed by the sellers forces them to buy listing advertisements on Amazon. The protests were a manifestation of the low bargaining power that individual sellers have against the world's biggest e-commerce company.

Now consider the information that Amazon has in terms of what products are sold where, at what price points, which are the major players in different segments, and so on. Studies show that Amazon uses its marketplace as a tinkering lab and leverages the information asymmetry to launch the most successful products on the platform under its own label. Once Amazon's private label launches the product, it undercuts the retailers on price and places its products favourably on the website, effectively killing competition.

The current standard of 'consumer welfare', pegged on short-term price effects, is inadequate for managing the above results. The de facto 'consumer welfare' standard, popularised by Robert Bork through his book The Antitrust Paradox, argues that the goal of antitrust laws should be maximising consumer welfare and protecting competition, not competitors. Since there is no clear evidence of Amazon raising prices in the short term after launching a product, proving consumer harm is difficult. Therefore, considering only the consumer welfare standard would be insufficient.
Lina M Khan points out that the structure of companies such as Amazon "create anti-competitive conflicts of interests" and provides opportunities to "cross-leverage market advantages across distinct lines of business." Also, with Big Tech companies such as Amazon backed by ever-flowing streams of venture-capital money, many ill effects might be seen only in the longer term. We should also be cognisant of the fact that sellers are also customers for Amazon; therefore, consumer welfare should apply to sellers as well.

As the Competition Commission of India conducts its investigations, it should examine all the new challenges posed by the likes of Amazon, be cautious in its approach, and propose a path where the penalties laid down for Amazon are not a slap on the wrist. The way forward is one where healthy competition can be sustained and the bargaining power of the sellers on the platform is increased.

This article was originally published in the Deccan Herald.
Technology is set to be the main front in the US-China trade war
Why we need protection from the Data Protection Bill
The Bill, in its current form, more or less tries to hand the government a blank cheque when it comes to accessing citizens' data.

The Ministry of Electronics and Information Technology (MEITY) is set to brief the Joint Parliamentary Committee on the Data Protection Bill on January 14. As MEITY itself has drafted the Bill, it is unlikely that it will suggest major changes. But the hearing is crucial because it has the potential to alter the course of India's privacy framework.

The Bill heavily favours the state. It allows the government to staff the Data Protection Authority (DPA) to be set up under the law, enables the Centre to demand non-personal data and allows for processing of personal data, while also giving the government the power to exempt any of its agencies from the legislation. There is a lot to discuss, but a few issues stand out in relation to the DPA and the right of the state to access a citizen's data.

Let us begin with the DPA. The Bill has a broad scope and mandate, and once Parliament passes it into law, the DPA's work will begin. The Bill outlines the DPA's duties as protection of the interests of data principals (the people whose data is in question), prevention of any misuse of personal data, ensuring compliance (with the Act), and promoting awareness about data protection. The first of these duties is interesting, as it gives the DPA a broad mandate to act as a representative on behalf of the people and their data.

The body will be expected to meet global standards, or even better them. It is important that those standards exist and be maintained. India is in a unique position to draft a law on data protection in which it can learn from the experiences of other countries, and it is only fair that India adopts a similar or even higher standard for the law. The thing to watch here will be how the DPA is staffed, particularly who the chairperson and six members will be and how they will be appointed. In its current form, the Bill states that one of the six members should have 'qualification and experience in law'. However, the need of the hour is not to have senior or retired bureaucrats in the DPA but experts who are acquainted with technology, law, and privacy.

The Bill had broadly three trade-offs to manage: define the powers of the state when it comes to data, set privacy standards around the personal (characteristic, trait, attribute or any other feature used for profiling) and personal-sensitive (financial data, health data, sex life, genetic data) data of citizens, and outline the roles and responsibilities of data fiduciaries.

The big-ticket item here is that the Bill has heavily favoured the government when it comes to access to data and its processing. There are two reasons why I say that. Firstly, Chapter 3 of the Bill lays out the grounds that allow the government to process personal data for a certain set of functions, and the text of the clauses is fairly broad. For instance, the first clause allows for the processing of personal data for the provision of any service or benefit to a data principal from the state. Although, as a proponent of privacy, I am thankful it does not apply to sensitive or critical data and wish it stays that way. Secondly, Chapter 14 gives the state, in consultation with the DPA, the power to demand non-personal or anonymised data from fiduciaries to enable better targeting of services or evidence-based policy-making.
Given the prevailing environment, one could fit a lot of ground under the umbrella of evidence-based policy-making and abuse that provision if it is not defined well.

In all fairness to the Bill, it has tried to formulate checks and balances when granting the executive these powers. Two instances come to mind. Firstly, in granting the power to demand non-personal or anonymised data, it requires the government to consult with the DPA. But given that the DPA will be staffed by people recommended and appointed by the central government, the process may end up being redundant. Secondly, the Bill also puts a check on the DPA when it asks the Authority to "specify the manner in which the data fiduciary or data processor shall provide the information sought, including the designations of the officer or employee of the Authority who may seek such information, the period within which such information is to be furnished and the form in which such information may be provided" (Chapter 9).

In spite of all this, I still think that the Bill more or less tries to hand the government a blank cheque when it comes to access to data. As we head into deliberations around this issue, I would argue that there is a chance this cheque will get blanker. For people who highly value privacy, the good news is that we still have the landmark Puttaswamy judgement, which establishes the fundamental right to privacy under the right to life and personal liberty. Moreover, the regulatory climate is shaping into one where that judgement will be needed more than ever, especially with the government giving itself the power to access data through the Bill, through recommending and appointing members of the DPA, through allowing agencies to intercept and access data, and through pushing to allow traceability in communications through amendments to the IT Act.

The personal data protection Bill is an essential step towards regulating a new space. However, given the draft version available, it also seems to be the beginning of a new tug of war for access to data. Through the Bill, the government has the power to push to erode privacy. The Puttaswamy judgement allows for privacy to be encroached upon if the encroachment has a basis in law, corresponds to a legitimate aim of the state, and is proportionate to the objective it seeks to achieve. We are looking at the state's actions being assessed against these three criteria for months and years to come.

(Rohan Seth is a technology policy analyst at The Takshashila Institution)

This article was first published in Deccan Herald.
Shutting down internet to curb opposing views is problematic
States around the world are divided along the lines of how they should view the internet. On one end of the spectrum, there are calls to treat the internet somewhat as a fundamental right. For instance, the UN subscribes to this view and is publicly advocating for internet freedom and protection of rights online. On the other end of the spectrum, there is India, where after over a hundred shutdowns in 2019 alone, you could arguably define access to the internet as a luxury.
In my personal opinion, shutting down the internet for an area is an objectively horrible thing to do, and it is no wonder that most states do not take it lightly. Even in Hong Kong, after months of protests, the government felt it acceptable to ban face masks in public gatherings; but when it came to the internet, it considered censorship, not a shutdown. The difference is that under censorship, access to certain websites or apps is restricted, but there is reasonable scope for protesters to contact their families and loved ones. The chronology will tell you that even censorship was considered as a measure only after weeks of protests.
In the case of India, that is one of the first things the government does. When India revoked Kashmir's autonomy on August 5, 2019, the government shut down the internet the same day. At the time of writing, it has been almost 150 days with no news of internet access being restored in the Kashmir valley. Naturally, people are now getting on trains to nearby towns with internet access to renew official documents, fill out admission forms, check emails, or register for exams.
There are multiple good arguments as to why the internet should not be shut down in a region. For one, shutdowns cost countries a lot of money. According to a report by the Indian Council for Research on International Economic Relations, during 2012-17, 16,315 hours of internet shutdown cost India's economy around $3 billion: 12,600 hours of mobile internet shutdown cost about $2.37 billion, and 3,700 hours of mobile and fixed-line internet shutdowns nearly $678.4 million. Telecom operators have also suffered because of the shutdowns that came as by-products of the Article 370 decision and the CAA, with the Cellular Operators Association of India (COAI) estimating the cost at close to ₹24.5 million for every hour of internet shutdown. Then consider the impact shutting down the internet has on the fundamental right to freedom of speech and expression, and the impact it has on the democratic fabric of our country.
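For a sense of scale, the per-hour cost implied by the ICRIER figures above can be worked out in a few lines. This is only a rough, illustrative calculation using the aggregates quoted in this piece:

```python
# Back-of-the-envelope: per-hour costs implied by the ICRIER figures cited above.
# (Illustrative only; uses the aggregate numbers quoted in this article.)
figures = {
    "all shutdowns, 2012-17": (3_000_000_000, 16_315),       # ($ cost, hours)
    "mobile-only shutdowns": (2_370_000_000, 12_600),
    "mobile + fixed-line shutdowns": (678_400_000, 3_700),
}

for label, (cost_usd, hours) in figures.items():
    print(f"{label}: ~${cost_usd / hours:,.0f} per hour")
# all shutdowns, 2012-17: ~$183,880 per hour
# mobile-only shutdowns: ~$188,095 per hour
# mobile + fixed-line shutdowns: ~$183,351 per hour
```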
In the case of India, internet shutdowns are also a bad idea because they reinforce their own duration and make future shutdowns more frequent.
Let me explain the duration argument first. Shutdowns tend to happen in regions that are already unstable or about to become so. For better or for worse, the violence and brutality resulting from the instability are captured and shared through smartphones. While those videos and photos may not be as effective as independent news stories, when put on social media they combine to build a narrative. Soon enough the whole is greater than the sum of its parts, creating awareness among people who had little or none before. The problem is that the longer the instability and the internet shutdown last, the more 'content' there is to build a narrative. In the case of Assam, and even more so in Kashmir, this is exactly what has happened. At this point, if the government rescinds the shutdown in either of those places, it faces the inevitable opening of the floodgates on social media. And the longer this lasts, the more content is going to be floating around.
Secondly, internet shutdowns make internet shutdowns more frequent. After revoking access to the internet a certain number of times, the current administration seems to have developed a model/doctrine for curbing dissent.
Step 1 in that model is shutting down the internet. This has led to shutdowns being normalised as a measure within the government, so it is no longer a calculated response but a knee-jerk reaction that kicks the freedom of expression in the teeth every time it is activated.
The broader point here is that taking away the internet is an act of running away from backlash and discourse.
To carry it out as an immediate response to protests is, in principle, to turn away from the democratic value of free speech. It is hard to believe that it may be time for the world's largest democracy to learn from Hong Kong (a state which uses tear gas against its people and then tries to ban face masks) when it comes to dealing with protesters.
(The writer is a technology policy analyst at the Takshashila Institution.)
This article was first published in Deccan Chronicle.
Amazon, Fine Margins, and Ambient Computing
There are some keynotes in the tech world that serve as highlights of the year. There is Apple's iPhone event and WWDC, where Apple traditionally deals with software developments. Then there is Google I/O, and also the Mobile World Congress. Virtually all of these are guaranteed to make the news. Earlier last year, it was an Amazon event that captured the news (outshining Facebook's Oculus event, held on the same day, in the process).

During the event, Amazon launched 14 new products. By any standard, that is a lot of announcements, products, and things to cover in a single event, so it can be a bit much to keep up with and make sense of what is happening at Amazon. The short version is that Amazon is trying to put Alexa everywhere it possibly can. It is competing with Google Assistant and Siri, as well as with your daily phone usage. It wants you to check your phone less and talk to Alexa more.

That would explain why Amazon has launched 'Echo Buds'. They have Bose's 'Noise Reduction Technology' and are significantly cheaper than Apple's AirPods. There is also an Amazon microwave (also cheaper than its competition), as well as Echo Frames and an 'Alexa ring' called Loop. The Echo speaker line has also been diversified to suit different pockets (and now includes a deepfake of Samuel L. Jackson's voice, which is amusing and incentive enough to prefer Alexa over other voice assistants unless the competition upstages them). Amazon also launched a plug-in device called Echo Flex (which seems ideally suited for hallways, in case you want access to Alexa while going from one room to another and are not wearing your glasses, earphones, or ring).

Aside from the huge number of form factors Amazon can now put Alexa into, the other thing to note about these products is how they are priced. You could make the argument that the margins are so thin that the pricing is predatory (a testament to what can be accomplished when one sacrifices profit for market share). Combine that with how they will be featured on Amazon's website and you can foresee decent adoption rates, not just in the US but also globally, should those products be available.

In the lead-up to the event, Amazon also launched a Voice Interoperability Initiative. The idea is that you can access multiple voice assistants from a single device. Notably, Google Assistant and Siri are not part of the alliance, but Cortana is. You can check out a full list here. The alliance is essentially a combination of the best of the rest. It aims to compensate for the deep system integration that Alexa lacks but Google Assistant and Siri have on Android and iOS devices.

Besides making Alexa more competitive, the broader aim of the event is to make Amazon a leader in ambient computing. Amazon knows it is going to be challenging to have people switch from their phones to Alexa, so it likely wants marginal wins (a practice perfected in-house). That is why so many of the announced products are concepts, or 'Day 1' products available on an invite-only basis. The goal is to launch a bunch of things and see what sticks and feels the most natural place to fit Alexa, so that Amazon can capitalise on it later.

It is Amazon's job to make a pitch for an Alexa-driven world and try to drive us there through its products and services, but not enough has been said about what it might look like once we are in it. An educated guess is that user convenience will eventually win in such a reality. As will AI, with more data points coming in for training.
This is likely to come at a cost to privacy, depending on Amazon's compliance with data protection laws (should they become a global norm). To be fair to Amazon, the event had some initial focus on privacy, which then shifted to products. However, the context matters. For better or worse, these new form factors are a step ahead in collecting user data. The voice interoperability project might also mean that devices will have multiple trigger words and, thus, more accidental data collection. To keep up with that, Amazon will need to improve its practices on who listens to recordings and how.

Amazon's event has given us all things Alexa at very competitive rates, which sounds great. If you are going to take away one thing from the event, let it be that Amazon wants to naturalise talking to Alexa. Its current strategy is to surround you with the voice assistant wrapped in different products. If it can make you switch to talking to Alexa instead of checking your phone, or using Google Assistant or Siri, even four times a day, that is a win it can build on.
Why missed call democracy is a bad idea
The Narendra Modi-led government launched a 'missed call campaign' on January 3, 2020, asking people to give a missed call to a number to register their support for the controversial Citizenship (Amendment) Act. Home Minister Amit Shah has claimed that 52,72,000 missed calls have been received from verifiable phone numbers.
What has been happening in the background since the launch of the campaign is a reflection of the state of affairs in the country. Ever since the campaign started, Twitter has been abuzz with misleading tweets asking people to call the number by promising ‘job offers’, ‘free Netflix subscription’, ‘romantic dates with women in the area’, and so forth. Tweets such as ‘Akele ho? Mujhse dosti karoge?’ (Feeling lonely? Want to be friends?) by a Twitter account with 16k followers, Prime Minister Modi being one amongst them, point to a much larger misinformation campaign presumably by the IT-cell of the ruling party. A counter-campaign was also launched soliciting missed calls to demonstrate opposition to CAA and NRC.
Where’s my number?
In the age of surveillance capitalism, any entity, especially the government, running a campaign to garner support using phone numbers opens up private individuals to grave risks. The people calling the toll-free number have no information on whether their numbers will be stored in a database, shared with third parties, and/or used for a future campaign by the government. First principles of privacy dictate that the data collected should be proportionate to the legitimate aim being pursued and limited to that purpose. Furthermore, the data principal should provide informed consent to the collection of data.
There seem to be no means for citizens to determine if the government is storing their data, and no process to get their records deleted if they wish to. Repurposing the potential database to micro-target during election campaigns is a severe threat that emerges from this exercise. People who called the number are either staunch supporters of the Bharatiya Janata Party (BJP) or vulnerable youth who fell into the honeytrap while looking for jobs, subscription TV, or romantic partners. Given that the government now potentially has access to members of its core voter base as well as gullible people at the margins, it can push information and opinions that favour its ideology. Alternatively, participants in the counter-campaign can be categorised as anti-establishment voices. This narrative dominance, empowered by personalisation algorithms, can result in the formation of filter bubbles where people are isolated from conflicting viewpoints, reinforcing their existing beliefs.
The design of the missed call campaign itself is flawed. An honestly designed campaign would have allowed people to register a vote either for or against. The absence of a way to express an opposing view reduces it to an exercise in confirmation bias. The missed call mechanism is also susceptible to manipulation. It is unclear whether these are features or bugs. While 52 lakh may seem like a sizable number, it is a drop in the ocean in a country of more than 130 crore people. In fact, the number is less than 3 per cent of the total BJP membership of 18 crore people.
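As a quick back-of-the-envelope check of that figure (using 1 lakh = 10^5 and 1 crore = 10^7):

$$\frac{52{,}72{,}000}{18 \text{ crore}} = \frac{5.272 \times 10^{6}}{1.8 \times 10^{8}} \approx 0.029 \approx 2.9\%$$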
Why referendums fail
If this approach to engaging with citizens is legitimised, it opens the door to its use every time there is a risk of backlash over a government decision. Even before Brexit became the poster child for failed referendums, political theorists had advised against them. When asked about the best time to use referendums, Michael Marsh, a political scientist at Trinity College, Dublin, was quoted as saying ‘almost never’.
In Democracy for Realists, political scientists Christopher Achen and Larry Bartels lament the idea that the ‘only possible cure for the ills of democracy is more democracy’. They cite a body of research concluding that citizens often do not have the necessary knowledge, nor the inclination to acquire it, when it comes to voting on nuanced issues. Decisions are often made on short-term considerations, such as personal tax savings or reductions in government expenditure, without an analysis of anticipated unintended consequences. Additionally, referendum processes tend to be captured by interest groups and are typically decided in favour of whichever side has deeper pockets. Low-effort voting methods, such as online voting and missed calls, are likely to be overused. This will desensitise the public, exacerbating all the shortcomings of referendums.
The use of missed calls by a democratically elected government to vindicate its stand on contentious issues is not only ineffectual; it also exposes unsuspecting individuals to severe risks. Employing systems without basic privacy considerations, clear purpose limitations, and straightforward redressal mechanisms can lead to misuse in the future and undermine the democratic ethos of the nation.
Will India follow Russian example on domestic internet?
After Russia tested RuNet, what are the chances that India will try its hand at NayaBharatNet?
In the final weeks of last year, there were reports that Russia had successfully tested RuNet, its ‘domestic internet’ that would be cut off from the global internet. Specifics of the exercise are not known – whether, for example, it was really successful and what challenges it faced – but it made for an ominous end to a decade marked by growing disillusionment with the concept of the internet as a liberating force.
This was always on the cards once Russia and China started working together in the lead-up to the former’s Yarovaya law, which imposed geographical restrictions on the transfer of Russian users’ data. In December 2019, Russia also passed a law making it mandatory for devices sold in the country to be embedded with Russian apps from July 2020. While it does not specify which devices and apps are covered, critics are concerned that the law’s vague nature opens the door for it to be misused to force the installation of spyware.
Russia is not alone in this quest, though: China is the pioneer, and others like North Korea and Iran are along for the ride as well. Iran’s National Intranet Network (NIN) is once again in the spotlight after a week-long nationwide internet shutdown in response to protests, and after an exercise by government officials to collate critical ‘foreign’ websites sparked speculation about the creation of a ‘whitelist’ of allowed sites. This was followed by a statement from President Rouhani that the network was being strengthened so that people would not need foreign networks to meet their needs. North Korea, too, has a tightly controlled domestic internet, Kwangmyong, whose content is largely controlled by the state.
China’s Great Firewall (GFW) has been around for over a decade and is not the unitary system it is often made out to be. It uses a combination of manual and automated techniques to block global content, but largely works on the principle of blacklisting unwanted websites and content. Many international websites do work but are extremely slow because of the scanning and filtering that inbound internet traffic to the country is put through. For a website to operate from inside mainland China, a number of local permits are required, depending on the industry. Much of the internet backbone is state-controlled. The state has continued to tighten the noose through a combination of restrictive regulation and stricter interpretation of existing rules.
A highly restrictive cybersecurity law passed in 2015 called for mandatory source code disclosures. In 2016, working with ISPs, the government set out specifications for an Information Security Management System that aimed to automate the ability of provincial authorities to monitor and filter internet traffic. In 2017, it tweaked licensing rules to ensure that permits would only be issued to domains registered to a mainland China-based company. The extent to which these rules are enforced may vary, but they hang a ‘Sword of Damocles’ over businesses that the state can drop whenever it chooses to. By constantly increasing the costs of doing business for non-Chinese companies, it has achieved a ‘chinternet’ without explicitly cutting the cord – yet.
Fears of a ‘splinternet’ along national boundaries, or the ‘balkanisation’ of the internet, are not new. But the likelihood is now higher than ever as governments try to take control over cyberspace after ceding space in its early years. Research by the Oxford Internet Institute and Freedom House has revealed the use of disinformation campaigns and the co-option of social media for manipulation and surveillance by various governments. The United Nations General Assembly passed a resolution in support of a Russia-backed Open-Ended Working Group (OEWG), which has drawn criticism on the grounds that it prioritises cyber sovereignty and domestic control of the internet over human rights. Countries that advocate a free and open internet are in a bind over whether to participate in the group or cede ground in the global norm-setting process. The continued passage of regulations with extraterritorial application by various countries will fragment the internet and strengthen the constituency favouring cyber sovereignty.
‘NayaBharatNet’ a possibility?
India has yet to articulate its position on some of the divisive issues concerning global norms in cyberspace, but it has repeatedly stressed the principle of cyber sovereignty, positioning it alongside the Sino-Russian camp. While it seems to have softened its position on data localisation for now, its rhetoric on national sovereignty and security echoes that used by Russia and China in the past.
Authoritarianism by the Indian state is also surely on the rise – events that unfolded in 2019 provide ample empirical evidence for this. The fact that various police departments are proactively taking to social media channels to threaten or deter posts that run contrary to the state’s narrative, and the frequent use of internet shutdowns, show that the desire to control the internet is extremely strong. International criticism has repeatedly been portrayed as mischief by a ‘foreign hand’. The creation of a strictly regulated domestic digital echo chamber is not unimaginable in this context. In fact, it is a logical next step, as the current tactics are bound to have diminishing returns over time.
Today, the political economy for such a move does not exist. The IT industry would vigorously oppose it. And unlike in China, the telecommunication backbone infrastructure is not state-owned, though the sector as a whole is probably the weakest it has ever been, is tending towards a monopoly or duopoly, and has a history of being regulated with a heavy hand.
Until now, India has followed a policy of denying cyber intrusions or claiming that no significant harm was done. However, in the aftermath of ‘undeniable’ real-world harm inflicted by a cyber attack, the Overton window could shift towards supporting such an initiative in the name of national security, and that opening could very well be exploited. Sometime in the not-so-distant future, we could all be communicating using Kimbho on NayaBharatNet.
(Prateek Waghre is a research analyst at The Takshashila Institution)
This article was originally published in Deccan Herald.
Disney Should Buy Spotify
You may think that winning the streaming race depends on having the best content, but things have already begun to change. As of now, the company with the better bundle will win, and that is why it makes sense for Disney to buy Spotify this year.
To read the full article, visit OZY.
Rohan is a technology policy analyst at The Takshashila Institution.
Your Fitbit is Going to Replace Clinics near You
First, it was payments; now it is healthcare. Big Tech in the US and China is revolutionising the health sector, with hundreds of billions of dollars of market share at stake.
There are multiple factors driving this movement. For starters, there is the simple need to find new avenues of growth for both American and Chinese tech giants, and there are only so many trillion-dollar industries left to disrupt to add shareholder value. China has more reasons and more at stake here. Both countries boast high levels of internet penetration and smartphone use. Both the US and China are rapidly ageing societies, which implies a growing geriatric healthcare burden and creates incentives for new alternatives to overcrowded hospitals. Both are home to a wealthy middle class that is seeking better health solutions. According to Royal Philips’ Future Health Index 2019, both the US and China are global frontrunners in the adoption of digital health technology, with a large number of medical professionals and consumers relying on tools for self-monitoring and online consultations. This is a key contributor to the rising demand for wearables in both markets, and it is supported by, and in turn fuels, their dynamic and thriving innovation ecosystems.
This explains why American and Chinese companies are making moves in healthcare based on their core competencies. Recently, Amazon banked on its software to move into telemedicine and also invited healthcare companies to build tools on Alexa’s platform. Amazon’s core competence, however, is the efficiency of its distribution networks, so the e-commerce giant acquired PillPack, an online pharmacy. The Alibaba Group, on the other hand, entered the healthcare game early with its Tmall Pharmacy in 2015. In 2018, Alibaba consolidated its healthcare assets, including medical devices, e-appointments, drug purchases, and delivery services, under the banner of Alibaba Health, which leverages the group’s advantages in data processing and e-commerce. Another big Chinese player in the field is Tencent, which owns WeDoctor, one of the world’s biggest health tech start-ups. Google is great at data analytics and OS development; keep that in mind and Project Nightingale begins to make sense, as does Google’s $2.1 billion acquisition of Fitbit. Google’s Chinese search counterpart Baidu has bounced back from a 2016 controversy over healthcare ads to explore the possibility of leveraging artificial intelligence and blockchain technology for its medical data sharing and distribution solution. Meanwhile, Apple excels in devices that track wellness. Think of the Apple Watch and the electrocardiogram that comes installed on it, or the dedicated CareKit and ResearchKit open-source frameworks that Apple has been pushing to developers. IDC data for 2018 show that while Apple is the market leader in the wearables segment, Chinese firms Xiaomi and Huawei take the second and third spots, respectively. Their global ranking is buttressed by their dominance in the Chinese and Indian markets.
So what does the future of the health tech sector look like? We predict three scenarios that we believe will play out over the next five years.
First, wearables will become the new OPDs (outpatient departments): with Big Tech investing in healthcare across Silicon Valley, Zhongguancun, and Shenzhen, wearables and telemedicine have a bright present and future in their diagnostic capabilities.
Recording your pulse or temperature, scanning bones or tissues, diagnosing based on those readings, and getting medicines are becoming tasks that can be handled remotely or delivered to you. Over the coming decade, wearables will reliably send accurate data in real time for processing, for millions of people. This would give them a decisive advantage over physical OPDs in the number of people they can cater to, making the latter obsolete.
Second, tech giants will dominate health and life insurance: wearables and smartphones are becoming increasingly sophisticated in their diagnostic and tracking capabilities. As that continues with every new iteration of Fitbits and Apple Watches, the OS becomes a platform for companies to sell services and earn revenue. watchOS and Wear OS (and whatever a future Fitbit OS ends up being called) are likely to go on to sell insurance through their devices. Whether Google and Apple curate new insurance policies or end up acquiring insurance companies to do it for them is irrelevant. Considering that insurance is a lucrative market and that data from the apps on these operating systems gives Google and Apple a comparative advantage, it is a matter of when, not if, both tech giants start peddling their own insurance through the OS on smartphones or wearables.
Third, Sino-US rivalry will stymie health tech’s future growth: the deepening strategic rivalry between the US and China has already shifted from competition over trade policies to a battle for technological supremacy. This is playing out in the form of an expanding definition of sensitive technologies that must be protected, tighter security reviews of Chinese tech investments, the undoing of completed acquisitions, the blacklisting of certain firms, export restrictions, and a contest for foreign markets and data streams. Much of this is captured in the geopolitically charged discourse over Huawei and 5G. The health tech industry can expect a similarly rocky future. Collaboration between research communities and business entities across the Pacific will be difficult. Acquisitions in foreign markets are likely to become politically polarising decisions. Capital flows into each other’s health tech ecosystems will become increasingly constrained. Data will become the biggest sticking point, with most states preferring some form of localisation.
Can Modi govt know who you text? Should FB be liable for your posts? We’ll know in Jan 2020
Apart from deciding on end-to-end encryption for chats, the amended IT Rules will also decide on what content belongs on the internet.
Should Facebook be liable for the content you post? Should Apple build a backdoor to allow access to iPhones? Should the government know who you are texting, and should it have access to your messages? On 15 January 2020, the amendments to India’s IT Rules will answer these questions by finalising the intermediary guidelines.
That is also one of the reasons why, over the course of 2019, we have talked about whether the government of India should be allowed to break end-to-end encryption. Of course, the topic gained traction after the November Pegasus-WhatsApp hack reports. And the Narendra Modi government said the law allows it to intercept and monitor digital content in the public interest.
The problem with this whole encryption debate is that it takes up a disproportionate amount of mind space. Don’t get me wrong; encryption is a vitally important issue. However, it is not the only issue that will be covered by the IT amendments. The January amendments will also decide on these crucial issues.
Also, we will use the words intermediary and platform interchangeably. But for context, a platform is an online service like Facebook or Twitter, while intermediary includes platforms, the servers they are hosted on, and even the cybercafé you might access the platform through.
How many users before a company needs an office in India?
According to the proposed amendments, any intermediary with over 50 lakh users will need to:
- Have a permanent registered office in India
- Appoint a nodal point of contact for the government
- Be incorporated under the Companies Act
This may read fine at first glance. But take another look. Users as a term is vague. Monthly active users? Daily active users? Registered users? You might have an account on Pocket, but never end up using it. Does that mean Pocket now needs to have an office in India and appoint a person in charge of talking to the government, on the off chance that 50 lakh people one day decide to use the app?
The other thing here is: how does the government keep track of which intermediaries have appointed a nodal point of contact? Apps do not notify the government before they are made available to the people. Instead, they show up on the App Store/Play Store, ready to be used. And how would the government even know when an intermediary has crossed 50 lakh users? Should all intermediaries make their user stats public or release a notification when they meet the threshold?
Clearly, these guidelines were drafted keeping just Facebook and WhatsApp in mind. However, they will have anticipated but unintended consequences as far as smaller firms are concerned.
What content belongs on the internet?
The intermediary guidelines also talk at length about content takedowns and what should and should not be allowed to remain on the internet. You could say that the Modi government has written itself a blank cheque in being able to dictate this. Here are just some of the grounds on which companies may be asked to remove content:
- In violation of decency and morality
- Public order
- Impacts the sovereignty and integrity of India
- Security of state
- Friendly relations with foreign states
- In relation to contempt of court
- Defamation or incitement to offence
- Defamatory
- Obscene
- Pornographic
- Paedophilic
- Hateful
- Harassing
- Blasphemous
A lot of these make sense. We as a society have a consensus that child porn, hate crimes, and videos of animal cruelty do not belong on the internet. The government also has every right to argue that content that impacts its security and relations with other states should be taken down. But look at some of the other grounds. Who decides what content is defamatory or blasphemous? For instance, comedy at the expense of someone or something can end up disparaging the subject. Does that mean comedy does not belong on the internet? You could argue a similar case for memes, documentaries, and blogs. Based on these grounds, anything that the government of the day doesn’t like can be taken down.
Should we have a best-efforts approach to aiding law enforcement?
Remember the anticipated but unintended consequences? Well, not all intermediaries have the same access to user data. A cloud service provider does not have the same power as a multi-million user platform. So, when law enforcement goes asking for information, it should also take into account the asymmetries that exist within the ecosystem.
A best-efforts approach would make sure that requests do not make cloud service providers or even cybercafés liable for sharing data they don’t have access to. Because if, at the end of the day, a request is not technically feasible, all it does is ensure that the matter is taken to court, placing undue stress on the intermediary.
As for whether or not the government should break encryption, I’d strongly recommend against it. Internet shutdowns are bad enough. Imagine if we lived in a world where the government could learn who you text and what you may be talking about. Recently, American WeChat users were banned for celebrating the Hong Kong election results. Similar instances could end up happening in India, and at scale, that could end up being a threat to democracy unlike any we have seen before. To that end, watch out for the guidelines on 15 January; they could set the tone for the rest of the year.
Rohan Seth is a Policy Analyst with the Technology and Policy Programme of The Takshashila Institution. Views are personal.
This article was first published in The Print.
PLA SSF: Why China will be ahead of everyone in future cyber, space or information warfare
The People’s Liberation Army Strategic Support Force (SSF) contingent made its debut appearance at China’s military day parade earlier this year. Formed on this day in 2015, it is mandated to create synergies between China’s space, cyber, and electronic warfare capabilities. The PLA considers these three domains critical for “commanding strategic heights.” The SSF was formed to optimise China’s dominance in these three domains and also to contribute to the PLA’s broader goals of strategic deterrence and integration for information warfare. Read more...
India’s National Cybersecurity Policy Must Acknowledge Modern Realities
This article originally appeared in The Diplomat
Look at the numbers: Why Digital India can’t afford internet shutdowns with slowing economy
Take a look at these numbers – 3, 5, 6, 14, 31, 79, 134, 91. These are the documented instances of internet shutdowns in India in each year from 2012 to 2019. The 2019 number will certainly rise during the final weeks of the year as anger against the Citizenship (Amendment) Act and the Bharatiya Janata Party grows.
And yet, as internet shutdowns are reported in Meerut, Aligarh, Malda, Howrah, Assam, and Nagaland, one wonders if the Narendra Modi government really thinks they can help assuage anger and old resentments.
The world over, protesters have always found a way around any clampdown. In Hong Kong, protesters are using Bridgefy, a messaging service that relies on Bluetooth, to organise.
And yet, all governments, irrespective of whether it is the Congress or the BJP or any other party, keep using internet shutdowns as a kill switch. But tech stops for no one. It is time India thought beyond shutdowns.
A new era
In almost all cases, mobile internet services were shut down. For four of the last five years, more than half of these shutdowns have been ‘proactive’ in nature. They have been imposed based either on Section 144 of the CrPC or on the Temporary Suspension of Telecom Services Rules issued by the Ministry of Communications under the NDA government in 2017. While an appeal against the use of the former was struck down by the Supreme Court in 2016, the latter suffers from a lack of transparency and was passed without any consultation with citizens, who are directly affected. RTI requests have also revealed that many instances of internet shutdowns go undocumented and that due process is not always followed.
The willingness and urgency on display to snap communication lines is worrying, especially in ‘Digital India’. Considering that 97 per cent of the estimated 570 million internet users use a mobile device to access the internet, and given the growing reliance on connectivity for communication and commerce, this is a severely disproportionate measure. Various studies have pegged the cost of these disruptions at anywhere from 0.4 to 2 per cent of a country’s daily GDP, to $3 billion for India over the five-year period ending in 2017.
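To get a rough sense of what that daily-GDP range implies, here is an illustrative back-of-the-envelope calculation; the GDP figure of roughly $2.7 trillion is an assumed ballpark for India around 2018-19, not a number taken from the studies cited above:

$$\text{daily GDP} \approx \frac{\$2.7 \times 10^{12}}{365} \approx \$7.4 \text{ billion}; \qquad 0.4\% \text{ to } 2\% \text{ of this} \approx \$30 \text{ million to } \$148 \text{ million per day of nationwide disruption}$$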
Since 2017, India has witnessed nearly twice as many shutdowns. Even so, until mid-2019, internet shutdowns predominantly affected parts of Rajasthan and Jammu and Kashmir, which together account for nearly 250 instances. More importantly, they were rarely imposed in urban centres. In August 2019, a new era began unfolding. First, the ongoing internet shutdown in the region of Jammu and Kashmir is the widest sustained disruption ever documented. Second, on the day of the Supreme Court’s Ayodhya verdict, proactive internet shutdowns were in operation in Aligarh, Agra, and Jaipur, signalling a shift in the willingness to deploy them in urban centres. And finally, with the ongoing protests against the Citizenship (Amendment) Act, reports have been coming in of internet disruptions in Assam, Tripura, multiple districts in West Bengal, and Aligarh and Meerut in Uttar Pradesh, cementing the use of internet shutdowns as the tool of choice.
Diminishing returns
The framework of Radically Networked Societies (RNS) can be used to understand the interplay between protesters and the state. An RNS is defined as a web of connected individuals possessing an identity (real or imagined) and having a common immediate cause. The internet as a medium provided them the ability to scale faster and wider than ever before.
With measures like internet shutdowns and curfews, the state aims to increase the time it takes for them to mobilise by restricting information flows. However, such methods are bound to have diminishing returns over time.
Snapping communication lines will do little to quell genuine resentment and may conversely encourage people to take to the streets and violate curfews, thereby increasing chances of escalation. Mesh networking apps that operate without internet connectivity will eventually make their way into the toolkit of Indian protesters, like they did in the Hong Kong protests, rendering the argument of shutdowns as an ‘online curfew’ moot.
Better than shutdowns
The Indian state must evolve beyond the use of internet shutdowns. Instead, it should look to address the underlying causes and reduce the time it takes to counter-mobilise. There have been some instances of state authorities trying different approaches.
In September 2016, when there were protests in Bengaluru over the Cauvery water-sharing judgment, instead of shutting down the internet, the Bengaluru Police took to Twitter to dispel misinformation and rumours proactively. In the days leading up to the Ayodhya verdict, several police departments were proactively monitoring social media for objectionable messages. While this did not function smoothly on the day of the verdict, since the police went on an excessive case-registering spree, the Bengaluru example shows that it can work. Building future capacity and training cyber personnel specifically to counter flows of misinformation online must be a consideration going forward.
The reaction to viral hoax messages circulating before the Ayodhya verdict, warning of surveillance, also produced some interesting insight. While more surveillance is never the answer, alternative ways of promoting responsible behaviour should be explored. These could range from encouraging fact-checking of information to political leaders leading by example and not encouraging abusive trolls or misinformation flows themselves. Conflict and polarisation as engagement must be actively discouraged.
Another important step is to counter dangerous speech in society. Research has shown that misinformation and disinformation do not circulate only during specific events; the conditions that exacerbate such flows already exist in society. While the state cannot do this alone, it must nudge people towards countering it. Such measures must be articulated in the upcoming National Cybersecurity Policy.
Ultimately, the fact that the world’s largest democracy is by far the world leader in such disproportionate tactics should be reason enough for the Indian state to rethink the use of internet shutdowns. But if that does not suffice, the realisation that shutdowns come with an expiry date should spur it into fixing the underlying problems, unless it wants to live with the diminishing returns that incentivise escalation.
The author is a Research Analyst at The Takshashila Institution’s Technology and Policy Programme. Views are personal.
This article originally appeared in ThePrint.in