Commentary
Find our newspaper columns, blogs, and other commentary pieces in this section. Our research focuses on Advanced Biology, High-Tech Geopolitics, Strategic Studies, Indo-Pacific Studies, and Economic Policy.
Here’s Why Facebook Should Collect Data on Our Political Leanings
As a global community, we should have a more visible and informed choice in what content we want to consume. The full article is available here. Rohan is a Policy Analyst at The Takshashila Institution.
Lessons from Facebook and Twitter's Political Ads Policies
Over the course of the last few weeks, we have seen Facebook and Twitter take opposing views on the issue of political ads. While the issue itself does not have an immediate implication for Indian politics, the decisions of the two companies, their actions throughout the episode and reactions to them are emblematic of the larger set of problems surrounding their policies. They serve as a reminder that we should not expect these platforms to be neutral places in the context of public discourse solely through self-regulation.
In late October, Facebook infamously announced that it would not fact-check political ads. Shortly afterwards, Twitter CEO Jack Dorsey announced, via Twitter, that the company would not allow any political ads after November 22. Though Twitter is not alone in this approach, its role in public discourse differs from that of companies like LinkedIn and TikTok, which already have similar policies. Google, meanwhile, announced a new political ads policy on November 20. Its policy aims to limit micro-targeting across Search, display and YouTube ads; crucially, it reiterated that no advertisers, political or otherwise, are allowed to make misleading claims. At face value, it may seem that one of these approaches is far better than the other, but a deeper look brings forth challenges that both will find hard to overcome.
Potential for misuse
To demonstrate the drawbacks of Facebook’s policy, US lawmaker Elizabeth Warren’s presidential campaign deliberately published an ad with a false claim about Facebook CEO Mark Zuckerberg. In another instance, Adriel Hampton, an activist, signed up as a candidate for California’s 2022 gubernatorial election so that he could publish ads with misleading claims (he was ultimately not allowed to do so).
While Twitter’s policy disallows ads from candidates, parties and political groups/ political action committees (PACs), Facebook claims it will still fact-check ads from PACs. For malicious actors determined to spread misinformation/disinformation through ads, these distinctions will not be much of an impediment. They will find workarounds.
While most of the conversation has been US-centric, both companies have a presence in over 100 countries. Consistently enforcing policies across all of them requires significant local context and human effort. The ongoing trend of substituting human oversight with machine learning could limit the acquisition of such local knowledge. For example, does Facebook's policy of not naming whistle-blowers work in every country where it operates?
Notably, both companies stressed how small an impact political ads have on their respective bottom lines. Considering how skewed revenue per user is in North America and Europe compared with Asia-Pacific and the rest of the world, the financial incentive to enforce such resource-intensive policies equitably is limited. Both companies also have a history of inconsistent responses to moral panics, resulting in uneven implementation of their policies.
A self-imposed ban on political ads by Facebook and Twitter in Washington state, adopted to avoid dealing with complex campaign finance rules, has resulted in uneven enforcement and a complicated set of rules that have proven advantageous to incumbents. In response to criticism that such rules would adversely impact civil society and advocacy groups, Twitter initially said ‘cause-based ads’ would not be banned, and eventually settled on limiting them by preventing micro-targeting. Ultimately, both approaches are likely to favour incumbents or those with deeper pockets.
Fixing Accountability
The real problems for social media networks go far beyond micro-targeted political advertising, and the shortcomings across capacity, misuse and consequences apply there as well. The flow of misinformation and disinformation is rampant: a study by the Poynter Institute highlighted that misinformation and disinformation outperformed fact-checks by several orders of magnitude. Research by the Oxford Internet Institute and Freedom House has revealed the use of online disinformation campaigns and the co-option of social media by various governments to power the shift towards illiberalism. Conflict and toxicity now seem to be features meant to drive engagement. Rules are implemented arbitrarily, and suspension policies are not consistently enforced. The increased use of machine-learning algorithms in content moderation (which can be gamed by mass reporting) coincides with a reduction in human oversight.
Social media networks are classified as intermediaries, which grants them safe harbour: they cannot be held accountable for content that users post on them. 'Intermediary' is a very broad term covering everything from ISPs and cloud services to end-user-facing websites and applications across various sectors. Stratechery, a website that analyses technology strategy, proposes a content-moderation framework in which both discretion and responsibility increase the closer a company is to the end user. Under such a framework, platforms like Facebook, Twitter and YouTube should bear more responsibility and exercise more discretion than ISPs or cloud-service providers. It does not, however, explicitly fix accountability, which cannot be taken for granted.
Unfortunately, self-regulation has not worked in this context, and the platforms' status as intermediaries may require additional consideration. India’s proposed revised Intermediary Guidelines already tend towards over-regulation in addressing the challenges posed by social media companies, adversely impacting many other companies. The real challenge for policy-makers and society in countries like India is to strike a balance between holding large social media networks accountable and not creating rules so onerous that they can be weaponised to limit freedom of speech.
(Prateek Waghre is a Technology-Policy researcher at Takshashila Institution. He focuses on the governance of Big Tech in Democracies)
This article was originally published on 21st November 2019, in Deccan Herald.
Govt needs to be wary of facial recognition misuse
India is creating a national facial recognition system. If you live in India, you should be concerned about what this could lead to. It is easy to draw parallels with 1984 and say that we are moving towards Big Brother at pace, and perhaps we are. But a statement like that, for better or worse, would accentuate the dystopia and may not be fair to the rationale behind the move. Instead, let us sidestep conversations about the resistance, doublethink, and thoughtcrime, and look at why the government wants to do this and the possible risks of a national facial recognition system.
WHY DOES THE GOVERNMENT WANT THIS?
Let us first look at it from the government’s side of the aisle. A national facial recognition database can have many benefits. Rather than Big Brother, the best-case scenario is that the Indian government is looking at better security, safety, and crime prevention, and that the system would aid law enforcement. In fact, the request for proposal by the National Crime Records Bureau (NCRB) says as much: ‘It (the national facial recognition system) is an effort in the direction of modernizing the police force, information gathering, criminal identification, verification and its dissemination among various police organizations and units across the country’.
Take it one step further to a world where, down the line, the same database could also be used for gains in efficiency and productivity. For example, schools could take attendance using FaceID-like software, and checking train tickets would become more efficient (discounting the occasional case of plastic surgery that alters someone's appearance significantly).
POTENTIAL FOR MISUSE
The underlying assumption of this facial recognition system is that people implicitly trust the government with their faces. That assumption is wrong, not least because even if you trust this government, you may not trust the one that comes after it. This is especially true when you consider the power that facial recognition databases give administrations.
For instance, China has successfully used AI and facial recognition to profile and suppress minorities. Who is to guarantee that the current or a future government will not use this technology to keep out or suppress minorities domestically? The current government has already taken measures to ramp up mass surveillance. In December last year, the Ministry of Home Affairs issued a notification that authorized 10 agencies to intercept calls and data on any computer.
WHERE IS THE CONSENT?
Apart from the fact that people cannot trust all governments across time with data on their faces, there is also the hugely important issue of consent and the absence of a legal basis. Facial data is personal and sensitive. Not giving people the choice to opt out is objectively wrong.
Consider the fact that once such a database exists, it will be shared with state police forces across the country; the proposal excerpt quoted above says as much. There is every chance that we are looking at increased discrimination in profiling, with AI algorithms repeating existing biases.
Why should the people not have a say in whether they want their facial data to be a part of this system, let alone whether such a system should exist in the first place?
Moreover, because of how personal facial data is, even law enforcement agencies should have to go through some form of legal checks and safeguards to clarify why they want access to data and whether their claim is legitimate.
DATA BREACHES WOULD HAVE WORSE CONSEQUENCES
Policy, in technology and elsewhere, is often viewed through what outcomes are intended and anticipated. Data breaches are anticipated and unintended. Surely the government does not plan to share or sell personal and sensitive data for revenue. However, going by past trends with Aadhaar and the performance of the State Resident Data Hubs, leaks and breaches are to be expected. Even if you trust the government not to misuse your facial data, you should not be comfortable trusting the third parties who went through the trouble of stealing your information from a government database.
Once the data is leaked and being used for nefarious purposes, what even would remedial measures look like? And how would you ensure that the data is not shared or misused again? It is a can of worms which once opened, cannot be closed.
Regardless of where on the aisle you stand, you are likely to agree that facial data is personal and sensitive. The technology itself is extremely powerful and, in the wrong hands, easily misused. If the government builds this system today, without consent or genuine public consultation, it would all but ensure that this or future administrations misuse it for discriminatory profiling or for suppressing minorities. So if you live in India today, you should be very concerned about what a national facial recognition system can lead to.
This article was first published in The Deccan Chronicle. Views are personal.
The writer is a Policy Analyst at The Takshashila Institution.
There’s more to India’s woes than data localisation
The personal data protection bill is yet to become law, and debate is still rife over the costs and benefits of data localisation. It remains to be seen whether the government will mandate localisation in the data protection bill, and to whom it will apply. Regardless of whether localisation ends up enshrined in the law, it is worth taking a step back and asking why the government is pushing for it in the first place.
For context, localisation is the practice of storing domestic data on domestic soil. One of the most credible arguments for why it should be the norm is that it will help law enforcement. Most platforms that facilitate messaging are based in the US (think WhatsApp and Messenger). Because of the popularity of these ‘free services,’ a significant amount of the world’s communication takes place on these platforms. This also includes communication regarding crimes and violation of the law.
This is turning out to be a problem because in cases of law violations, communications on these platforms might end up becoming evidence that Indian law enforcement agencies may want to access. The government has already made multiple efforts to make this process easier for law enforcement. In December 2018, the ministry of home affairs issued an order granting powers of “interception, monitoring, and decryption of any information generated, transmitted, received or stored in any computer,” to ten central agencies, to protect security and sovereignty of India.
But this does not help in cases where the information may be stored outside the agencies’ jurisdiction. So, in cases where Indian law enforcement agencies want to access data held by US companies, they are obliged to abide by lawful procedures in both the US and India.
The bottleneck here is that there is no mechanism that can keep up with this phenomenon (not counting the CLOUD Act, as India has not entered into an executive agreement under it).
Indian requests for access to data form a fair share of the total, owing to India’s large population and growing internet penetration. Had there been a mechanism that provided for these requests in a timely manner, it would have aided enforcement through the provision of data. Most requests are US-bound, thanks to the dominance of US messaging, search, and social media apps, and are routed through mutual legal assistance treaties (MLATs). Each request has to justify ‘probable cause by US standards.’ This, combined with the volume of requests from around the world, weighs down the system and makes it inefficient. MLATs have been called broken, and there have been several calls to reform the system.
A comprehensive report by the Observer Research Foundation (ORF) found that the MLAT process takes, on global average, 10 months for law enforcement requests to receive electronic evidence. Ten months of waiting for evidence is simply too long, for two reasons. First, in law enforcement, time tends to be of the essence. Second, countries such as India have judicial systems with huge backlogs of cases; 10-month-long timelines to access electronic evidence make things worse.
Access to data is an international bottleneck for law enforcement. The byproduct of the mass adoption of social media and messaging is that electronic criminal evidence for all countries is now concentrated in the US.
The inefficiency of MLATs is one of the key reasons why data-sharing agreements are rising in demand and in supply, and why the CLOUD Act was so well-received as a solution that reduced the burden on MLATs.
Countries need standards that can speed up access to data for law enforcement, an understanding of what kinds of data may permissibly be shared across borders, and common standards for security.
India’s idea is that localising data will help law enforcement access it, at least eventually. It may compensate for India not being a signatory to the Budapest Convention. But it is unclear how effective localisation will be: Facebook’s data stored in India is still Facebook’s data.
Facebook is still an American company and would still be subject to US standards of data-sharing, which are among the toughest in the world and include an independent judge assessing probable cause and refusing bulk collection or overreach. This is before we take encryption into account.
For Indian law enforcement, the problem in this whole mess is not where the data is physically stored. It is the process that makes access to it inefficient. Localisation is not a direct fix, if it proves to be one at all. The answer lies in better data-sharing arrangements, based on plurilateral terms. The sooner this is realised, the faster the problems can be resolved.
Rohan is a policy analyst at the technology and policy programme at The Takshashila Institution. Views are personal.
This article was first published in the Deccan Chronicle.
How Pegasus works, strengths & weaknesses of E2E encryption & how secure apps like WhatsApp really are
Pegasus, the software that infamously hacked WhatsApp earlier this year, is a tool developed to help government intelligence and law enforcement agencies battle cybercrime and terror. Once installed on a mobile device, it can collect contacts, files, and passwords. It can also ‘overcome’ encryption and use GPS to pinpoint targets. More importantly, it is notoriously easy to install: it can be transmitted to your phone through a WhatsApp call from an unknown number (which does not even need to be picked up), and does not require user permissions to access the phone’s camera or microphone. All of that makes it a near-complete tool for snooping.

While Pegasus can hack most of your phone’s capabilities, the big news here is that it can ‘compromise’ end-to-end (E2E) encryption. The news comes at a testing time for encryption in India, as the government deliberates a crackdown on E2E encryption, a decision we will all learn more about on January 15, 2020.

Before we look at how Pegasus was able to compromise E2E encryption, let’s look at how E2E encryption works and how it has developed a place for itself in human rights.

E2E encryption is an example of how a bit of math, applied well, can secure communications better than all the guns in the world. The way it works on platforms such as WhatsApp is that once the user (sender) opens the app, the app generates two keys on the device, one public and one private. The private key remains with the sender, and the public key is transmitted to the receiver via the company’s server. The important thing to note here is that the message is already encrypted with the public key before it reaches the server. The server only relays the secure message, and the receiver’s private key then decrypts it.
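The exchange described above, with keys generated on each device and only public material crossing the company's server, can be sketched in a few lines of Python. This is a toy Diffie-Hellman-style illustration, not the Signal protocol that WhatsApp actually implements: the small Mersenne prime and the XOR stand-in for a real cipher are illustrative assumptions, never to be used for real security.

```python
# Toy sketch of the key exchange behind E2E encryption. NOT real cryptography:
# WhatsApp uses the Signal protocol (Curve25519 key agreement plus AES);
# the prime and XOR "cipher" below only show the shape of the exchange.
import hashlib
import secrets

P = 2**127 - 1   # a known Mersenne prime; real systems use standardized groups
G = 3            # public generator, agreed in the open

def keypair():
    """Each device generates its own (private, public) pair."""
    priv = secrets.randbelow(P - 3) + 2
    pub = pow(G, priv, P)
    return priv, pub

def shared_key(my_priv, their_pub):
    """Both ends derive the same secret; the relaying server never learns it."""
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(secret.to_bytes(16, "big")).digest()

def xor_cipher(key, data):
    """Toy symmetric step standing in for a real cipher such as AES."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Sender and receiver generate keys on their own devices.
send_priv, send_pub = keypair()
recv_priv, recv_pub = keypair()

# Only the public keys travel through the company's server.
k_send = shared_key(send_priv, recv_pub)
k_recv = shared_key(recv_priv, send_pub)
assert k_send == k_recv   # both devices hold the same key; the server holds neither

ciphertext = xor_cipher(k_send, b"hello")   # encrypted before leaving the device
plaintext = xor_cipher(k_recv, ciphertext)  # decrypted only on the receiving device
assert plaintext == b"hello"
```

The point of the sketch is the trust boundary: the server only ever sees `send_pub`, `recv_pub`, and `ciphertext`, none of which suffice to recover the message.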
End-to-end encryption differs from standard encryption because in services with standard encryption (think Gmail), the service provider, along with the receiver, generally holds the keys and can therefore also access the contents of the message.

Some encryption is stronger than others. The strength of an encryption scheme is measured by the size of its key. WhatsApp traditionally uses a 128-bit key, which is standard. Here you can learn about current standards of encryption and how they have developed over the years. The thing to keep in mind is that cracking a secure encryption key can take billions of years, depending on the key size (not taking quantum computing into account):

Key size | Time to crack
56-bit   | 399 seconds
128-bit  | 1.02 x 10^18 years
192-bit  | 1.872 x 10^37 years
256-bit  | 3.31 x 10^56 years

E2E encryption has had a complex history with human rights. On the one hand, governments and law enforcement agencies see E2E encryption as a barrier to ensuring the rights of citizens; examples of mob lynchings coordinated through WhatsApp, such as these, exist around the world. On the other hand, secure communication, and the anonymity it brings, has been a boon for people who might suffer harm if their conversations were not private: think of peaceful activists fighting for democracy around the world, most recently in Hong Kong. The same goes for LGBTQ activists and whistleblowers. Even diplomats and government officials rely on the seamless secure connectivity offered by E2E encryption. The general consensus in civil society is that E2E encryption is worth having as an increasing share of human communication moves online to platforms such as WhatsApp.

How does Pegasus fit in?

End-to-end encryption ensures that your messages are encrypted in transit and can only be decrypted by the devices involved in the conversation.
However, once a device decrypts a message it receives, Pegasus can access that data at rest. So it is not the end-to-end encryption that is compromised, but your device’s security. Once a phone is infected, Pegasus can mirror the device, literally recording the user’s keystrokes, browser history, contacts, files and so on.

The strength of end-to-end encryption lies in how well it encrypts data in transit: unless you have the decryption key, it is impossible to trace the origin of messages or read the content being transmitted. Its weakness, as mentioned above, is that it does not apply to data at rest. If messages were still encrypted at rest, users would not be able to read them.

At this point, how secure apps such as WhatsApp, Signal, and Telegram really are is widely debatable. While the encryption is not compromised, the larger system is, and that has the potential to make the encryption a moot point. WhatsApp released an update earlier this year that supposedly fixed the vulnerability, seemingly protecting communications on the platform from Pegasus.

What does this mean for regulation of WhatsApp?

The Pegasus story comes at a critical time for the future of encryption on WhatsApp and on platforms in general. The fact that WhatsApp waited roughly six months to file its lawsuit against the NSO Group will not help the platform’s credibility in the traceability and encryption debate. It also raises the question of what standards of data protection Indian citizens and users should be subject to. The data protection bill is yet to become law. With the Pegasus hack putting privacy front and centre, the onus should ideally be on making sure that Indian communications are secure against foreign and domestic surveillance efforts.
The three elements of China’s innovation model
In November 2018, the New York Times published a series that began with a story titled ‘The Land that Failed to Fail’. The central argument of the piece is that, defying Western expectations, the Communist Party has maintained its control in China while adopting elements of capitalism, eschewing political liberalisation, and pursuing innovation. The last of these three, innovation, is the subject of this piece.

What drives innovation in China? This is not merely a question about the mechanics of policy, the might of capital, the determination of dogged entrepreneurs, or the brilliance conjured up in university dormitories. Increasingly, it is a question of geopolitical significance, not just in the context of power politics but also in the debate over fundamental values of political and economic organisation. In other words, the question that China’s march towards becoming a “country of innovators” raises is whether a political system that prioritises control can foster genuine innovation.

Answering this requires an understanding of the key elements of the Chinese model of innovation. To my mind, there are three key components of this model: state support, a systems approach towards the development of new technologies and businesses, and the building of an effective “bird-cage.” There are, of course, other factors that support innovation, such as the pursuit of prestige, the desire to rebalance the economy, the need to enhance the effectiveness of governance, and the size of the consumer market. But it is the first three components that form the key pillars of China’s innovation model.

Read More...
All Roads Lead to the Middle Kingdom
In January 2017, Chinese President Xi Jinping stood at the podium in Davos defending economic globalisation. He argued that the world needed to “adapt to and guide economic globalisation, cushion its negative impact, and deliver its benefits to all countries and all nations.” And in this process, “China’s development is an opportunity for the world.” All of this was, of course, against the backdrop of the beginning of Donald Trump’s presidency in the US.

Addressing deputies at the National People’s Congress in March 2018, Xi doubled down on that message: "China will contribute more Chinese wisdom, Chinese solutions, and Chinese strength to the world, to push for building an open, inclusive, clean, and beautiful world that enjoys lasting peace, universal security, and common prosperity. Let the sunshine of a community with a shared future for humanity illuminate the world!"

Both speeches reflected strength. The essential message they conveyed was that the world needed China, and that under Xi, China was now surer of its destiny and keener than ever to play a larger international role. Yet as 2018 unfolded, this narrative came under severe strain. To assess how, we need to look at three dimensions: Xi’s status as the core of the Communist Party, the pushback against the Belt and Road Initiative (BRI), and the deepening competition with the US. It is the interplay of these three that is shaping China’s future.

Read More...
China’s big plan for AI domination is dazzling the world, but it has dangers built in. Here’s what India needs to watch out for.
China has been one of the early movers in the AI space, and evaluating its approach to AI development can help identify important lessons and pitfalls that Indian policy makers and entrepreneurs must keep in mind.
Breaking down China’s AI ambitions
The Social Credit System is about much more than surveillance and loyalty, as popularly understood. Nudging persons to adopt desirable behaviour and enhancing social control are part of the story. But there are larger drivers of this policy. It is fundamentally linked to the Chinese economy and its transformation to being more market driven.
In 2017, China unveiled a plan to develop the country into the world’s primary innovation centre for artificial intelligence. It identified AI as a strategic industry, crucial for enhancing economic development, national security, and governance.

The Chinese government’s command-innovation approach towards AI development is crafting a political economy that tolerates sub-optimal and even wasteful outcomes in the quest to expand the scale of the industry. Consequently, the industry is likely to be plagued by concerns about overinvestment, overcapacity, product quality, and global competitiveness. In addition, increasing friction over trade with other states, and President Xi Jinping’s turn towards techno-nationalism along with tightening political control, could further undermine China’s AI industry.

Before we dive into the challenges, here’s some background. Read more here.