Artificial Intelligence through the Samaaj-Sarkaar-Bazaar Lens: A Framework for Balanced Regulation
Published March 05, 2025 | This is a working paper presented at the Takshashila Institution’s Internal Conference on ‘Analysing Emerging Technologies through the Samaj-Sarkaar-Bazaar Framework’, February 2025
Executive Summary
This working paper presents a framework for balanced AI (artificial intelligence) regulation through the Samaaj-Sarkaar-Bazaar (society-state-market) lens. It argues that effective AI governance requires harmonising these three sectors so that AI development serves societal welfare while enabling innovation and maintaining appropriate state oversight. Society (Samaaj) must be the foundational consideration: individuals must be treated as citizens with rights rather than as subjects or customers, their privacy, data rights, and human agency must be safeguarded, and equitable access must be guaranteed. The State (Sarkaar) must balance its dual roles as regulator and deployer of AI, addressing national security concerns while avoiding conflicts of interest and managing risks without stifling innovation. The Market (Bazaar) faces challenges, including high infrastructure barriers and ethical considerations, but also demonstrates potential through open-source development and startup ecosystems. The paper recommends a unified but not uniform regulatory approach with layered frameworks; distributed agency to empower communities in participatory AI development, deployment, and governance; infrastructure and enablement through public-private partnerships; and market structuring to prevent monopolies while incentivising responsible innovation. Throughout, the framework emphasises that AI development must ultimately serve society's interests, requiring careful balancing through collaborative approaches across all three sectors. The paper also proposes that India's digital public infrastructure (DPI) be used as a template for developing platforms that provide value and equitable access to AI for Samaaj, with privacy, security and other ethical principles coded into the system by design.
Introduction
The regulatory challenge associated with the emergence of AI requires careful consideration of societal needs (Samaaj), state oversight (Sarkaar) and market innovation (Bazaar). AI can be considered a general-purpose technology (GPT) as described in Jeffrey Ding's GPT Diffusion Theory: it has pervasive uses across industries, and, beyond the first-mover advantage that AI-related industries (such as semiconductor and graphics processing unit (GPU) manufacturing) undeniably enjoy, its impact on economic growth and on society will be felt over a long timespan.
Considering the widespread impact of AI, a detailed analysis is necessary to build an effective regulatory and governance framework that balances the various intersecting interests.
Let’s look at AI from the three dimensions of Samaaj-Sarkaar-Bazaar (SSB), and examine how each sector interacts with AI development and deployment while offering recommendations for balanced regulation.
A. Samaaj (Society)
In an AI-enabled world, human interaction and agency are affected far more than with traditional technology adoption, with sweeping changes across core aspects such as privacy, individual rights, and social structures. Communities face opportunities (improved healthcare, education, daily convenience, etc.) as well as risks (job displacement, algorithmic bias, exclusion, privacy concerns, etc.). There's also the problem of varying levels of AI literacy, which can create digital divides and entrench inequality in access to AI benefits.
In the SSB framework, Society needs to be the foremost and foundational sector. When it comes to AI, the core interactions among the three sectors should be formulated such that individuals are viewed as citizens with rights, not as subjects of the State or customers of the Market. The latter two characterisations suggest a relationship of subservience for Samaaj, leaving its rights at the mercy of Sarkaar and Bazaar.
Individual privacy, data rights, control over one's private data, and the right to benefit from one's own data must be protected by regulation. Ensuring equitable access to AI benefits across social strata is a central concern of this rights-based, citizen-centric approach. Individuals should retain control and agency even in increasingly AI-driven systems. This also means allowing society to be part of the solution and of the solution-making process, through elected representatives (which is Sarkaar, in a way), public consultations, civil society participation, and so on. Addressing algorithmic bias and discrimination thus becomes crucial to the SSB framework: it ensures that the deployment of AI tools does not wrongfully exclude people from the State's benefits or misrepresent populations, and that access to AI tools is equitable and free of discrimination.
There is also the consideration of the impact of AI on jobs. As this paper notes, “while AI is highly likely to transform employment in all sectors, especially in services, when adopted responsibly, it's unlikely to cause mass unemployment in the near future. AI’s adoption in real-world settings is often slower due to factors such as implementation costs, process changes, and risk assessment… stresses the importance of upskilling and reskilling to adapt to an AI-driven economy.”
Sarkaar and Bazaar will need to take steps to enable this reskilling and upskilling. It is also in the Bazaar's interest to support the Samaaj here, so that the pool of talent feeding the Bazaar is adequately replenished.
All these considerations of the Samaaj’s interests offer a lens to study AI development and analyse its growth in a way that guides it towards the Samaaj’s welfare, as espoused by the SSB framework.
The deployment of facial recognition systems in public spaces has raised questions about the balance between security benefits, privacy, and the risk of erroneous outcomes. Public resistance then led to the idea of consent-based implementation frameworks, creating a precedent for public participation in technological decision-making.
B. Sarkaar (State)
Government responses to AI regulation range from China's restrictive approach and the EU's rights-based framework to the US flip-flopping on AI-related executive orders that placed onerous transparency and disclosure requirements on the Market.
State actors both regulate and deploy AI. Deployment may be for governance purposes, in which case they act as the State; but AI may also be deployed by public sector undertakings for their business purposes, in which case they act as the Market. This creates a conflict of interest that needs to be reconciled, because the regulator is also the producer.
Even when the State deploys AI for governance purposes, it has to follow regulations that are, ironically, laid down by the State itself. This, again, creates a conflict of interest. For instance, the Digital Personal Data Protection Act, 2023 gives the Indian State powers to exempt itself from data governance responsibilities and accountability.
Another perspective that States have to consider is AI sovereignty and its national security implications. Consider, for instance, the US policy on Advancing United States Leadership in Artificial Intelligence Infrastructure, which prioritises national security and the technological leadership of the US. Such policies can risk trampling on the rights of the Samaaj. However, the same policy also shows how the State can harmonise the interests of the Samaaj with its other considerations: it emphasises clean power for next-generation data centres, addresses community interests and economic impacts, and states that AI infrastructure development must proceed without raising energy costs for Americans. It also says that the transition to advanced AI infrastructure must create opportunities for American workers, not just technology companies. This shows that coexistence and collaboration are possible.
Markets require adequate support and freedom for innovation, and regulations should not be too onerous. The US government recently rescinded an executive order signed by the previous president, Biden, which required developers of AI systems to share the results of safety tests with the US government before they were released to the public. The rescinded order was considered by some to be too onerous, adding administrative burdens to a fast-changing field that has to remain agile.
Regulations should balance innovation with risk management, backed by a proper cost-benefit analysis of policies before they are enacted. The risks can be of various kinds:
Existential Risks: These include potential loss of control over advanced AI systems, misaligned artificial general intelligence (AGI), and irreversible technological decisions. The collapse of algorithmic financial systems and critical infrastructure failures also fall in this category.
Societal Risks: Broader societal consequences include job displacement, widening inequality, erosion of privacy, exclusion of communities from benefits, manipulation of public opinion, and the concentration of power in the hands of AI-capable entities.
Individual Risks: Potential threats include risks to personal autonomy, privacy, economic security, and psychological well-being, as well as algorithmic discrimination, surveillance, addiction to AI-powered systems, and loss of human agency in decision-making.
Economic Risks: These include market concentration, industry disruption, systemic financial risks from AI trading systems, and potential economic instability from rapid technological change. There are also risks to traditional business models and labour markets.
Against these risks, the benefits at the societal, individual and economic levels must be weighed. For each proposed regulation, its risk mitigation effectiveness must be evaluated, along with its benefit enhancement potential and implementation feasibility (an illustrative scoring sketch follows the list of trade-offs below). The key trade-offs are likely to be:
Innovation vs. Safety
Speed vs. Control
Freedom vs. Protection
Economic Growth vs. Stability
Individual Rights vs. Collective Good
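As an illustration of the kind of cost-benefit analysis envisaged here, the minimal sketch below scores hypothetical policy proposals on risk mitigation, benefit enhancement and implementation feasibility. The criteria, weights and scores are assumptions made for demonstration, not values proposed in this paper.

```python
# Illustrative only: a minimal weighted-scoring sketch for comparing AI policy
# proposals on risk mitigation, benefit enhancement and implementation
# feasibility. Criteria, weights and scores are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class PolicyAssessment:
    name: str
    risk_mitigation: float             # 0-10: how well the policy mitigates identified risks
    benefit_enhancement: float         # 0-10: how much it enhances societal/economic benefits
    implementation_feasibility: float  # 0-10: how practical it is to implement

    def weighted_score(self, weights=(0.4, 0.4, 0.2)) -> float:
        """Combine the three criteria into a single comparable score."""
        w_risk, w_benefit, w_feasible = weights
        return (w_risk * self.risk_mitigation
                + w_benefit * self.benefit_enhancement
                + w_feasible * self.implementation_feasibility)

# Hypothetical proposals, scored for comparison only
proposals = [
    PolicyAssessment("Mandatory pre-release safety disclosures", 8.0, 4.0, 5.0),
    PolicyAssessment("Voluntary disclosure with audit incentives", 6.0, 7.0, 8.0),
]

for p in sorted(proposals, key=lambda p: p.weighted_score(), reverse=True):
    print(f"{p.name}: {p.weighted_score():.2f}")
```

The point of such an exercise is not the specific numbers but making the trade-offs explicit, so that Samaaj, Sarkaar and Bazaar can debate the weights rather than only the conclusions.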
To ensure all this, the State must develop its technical and institutional capabilities. There has to be technical capacity for democratic oversight mechanisms, with active participation from the Samaaj and the Bazaar.
Without considering these factors, the State risks upsetting the fine balance between national interests, geopolitics, and global collaboration, and it must guard against doing so.
India's development of digital public infrastructure (DPI) demonstrates how states can create foundational platforms that enable innovation while maintaining public oversight and interest. The Unified Payments Interface (UPI) technology stack has privacy and security baked right into its code by design. UPI sets a wonderful standard for a participatory and collaborative SSB mechanism. AI could be plugged into this DPI without worrying too much about privacy and other concerns, because the base layers have taken care of that.
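As a purely illustrative sketch of what "coded in by design" can mean at a technical level (this is not the actual UPI or India Stack design; all names and interfaces are hypothetical), the snippet below shows a consent-gated data access layer: an AI use case plugged in on top can read data only through a purpose-bound, time-bound consent artefact, so privacy checks are enforced by the base layer rather than left to each application.

```python
# Illustrative only: a hypothetical consent-gated data access layer for a
# DPI-style platform. This is not the actual UPI/India Stack design; names
# and interfaces are assumptions made for demonstration.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentArtefact:
    citizen_id: str
    purpose: str            # the declared purpose the citizen consented to
    expires_at: datetime    # consent is time-bound by design

class ConsentGatedStore:
    """Base layer: applications never see raw data without valid consent."""

    def __init__(self, records: dict):
        self._records = records  # citizen_id -> data record

    def read(self, consent: ConsentArtefact, requested_purpose: str) -> dict:
        if requested_purpose != consent.purpose:
            raise PermissionError("Purpose does not match the citizen's consent")
        if datetime.now(timezone.utc) > consent.expires_at:
            raise PermissionError("Consent has expired")
        return self._records[consent.citizen_id]

# Any AI use case plugged in on top inherits these guarantees automatically:
# it can only call read() with a valid, purpose-bound, time-bound consent.
```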
C. Bazaar (Market)
Private sector AI development has seen rapid advances in foundational research, innovation, and applications for various use cases. The enormous infrastructure requirements of AI, however, create a high entry barrier for smaller players, developers, and developing nations. They also have a huge environmental impact that is detrimental to the Samaaj: high energy demand has forced a retreat from renewable energy commitments and a reliance on coal- and natural-gas-based energy sources. High-quality data is another bottleneck. All of this means that the development of AI capabilities is being shaped by a small number of big corporations and wealthy nations, which brings their biases and priorities into play.
However, the open-source model of development is also seeing promising activity. DeepSeek's recent release of open-source AI models, which have shown better performance at a fraction of the current cost and infrastructure needs, illustrates how market forces and open-source development can drive both innovation and accessibility. This development challenges the narrative that significant AI advancement requires massive corporate resources, suggesting a more distributed future for AI development. There is a growing startup ecosystem emerging in AI applications as well. This means more choice and distributed control for both Samaaj and Bazaar.
While such developments push boundaries in technical innovation, ethical considerations of privacy, data governance, copyright and harm are also pushed to their limits. Law and policy frameworks have not kept pace. The void has been filled by Market-driven solutions for AI safety and ethics, though questions remain about their adequacy and comprehensiveness.
The ideal state is “Low Costs, High Access”. But even a “Low Costs, Low Access” scenario can hold back development, because technical barriers limit participation in AI development. India, for instance, faces a low-access scenario. Policy solutions, therefore, must focus on education and capability building to democratise AI development skills and knowledge. “High Costs, Low Access” spells absolute market failure, calling for structural reforms to prevent monopolistic control and to create public infrastructure.
Even in developed countries, the situation can be described as a “High Costs, High Access” scenario, in which large technology companies dominate the market. Policies in such a situation should target cost barriers through infrastructure sharing and subsidies. The US Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure, dated January 14, 2025, which provides for leasing federal lands to private entities, is one such measure.
Another point to consider is that AI has a huge impact as a general-purpose technology (GPT) in the mould of Jeffrey Ding’s GPT Diffusion Theory, as noted previously. It has pervasive uses across different industries and sectors. This provides a huge opportunity for its exploitation, often at the expense of Samaaj’s interests. There is a massive scope for profits through the operation of a free market. It is, however, in everybody’s interests to make sure that all of this is geared towards Samaaj as a foundational interest.
As already noted, DeepSeek’s example demonstrates how market forces can drive both innovation and accessibility without requiring massive corporate resources.
D. Intersectional Analysis
Let’s look at the intersections in the Samaaj-Sarkaar-Bazaar framework.
Samaaj-Sarkaar Interface
The Samaaj-Sarkaar interface demands robust and effective mechanisms for government accountability in AI deployment. Deployments need to guard against bias, exclusion, discrimination, and violence. Redressal mechanisms must be responsive, quick, and effective, not merely procedural. The deployment process and redressal mechanisms should not be controlled by bureaucrats, politicians, and judges alone (in effect, the State); they need participation from civil society and experts too. Protecting citizen rights in this process will require digital literacy initiatives across the board: in the legislature, the executive, and the judiciary, as well as among citizens.
Sarkaar-Bazaar Interface
This interface requires carefully calibrated regulatory frameworks and market incentives to promote responsible innovation. The State will not always have the relevant capabilities to understand and promote this constantly evolving sector. Public-private partnerships will be the need of the hour and will require innovative market incentives for responsible AI, without being too restrictive or cumbersome. Copyright laws, for instance, will have to be light-touch, weighing individual rights against larger societal benefits. There could be broader "public interest" exemptions in a copyright regime, applied to AI development when it serves national strategic interests, essentially placing societal benefits over individual rights. There could be openness to allowing text and data mining for research and innovation purposes, with limitations on subsequent commercial exploitation. Competition laws will have to be robust to prevent anti-competitive practices and to monitor and regulate vertical and horizontal integrations that can create entry barriers for smaller players.
Another perspective to be considered here is the co-development and maintenance of DPIs by Sarkaar and Bazaar. India, through India Stack and other public-private partnerships in DPIs, has shown the way ahead in developing robust digital platforms that have privacy and security (and other ethical design principles) coded into their frameworks by design, and not as an afterthought. These platforms have been built to work at population scale, and provide a template of how these systems need to be designed to provide value, access, and security to Samaaj.
Bazaar-Samaaj Interface
This interface will include reskilling and upskilling of Samaaj to handle the impact of AI on jobs, community engagement in AI development, and ethics in business practices. The evolution of product development would do well to consider societal needs. The Bazaar has to effectively care for privacy and data governance concerns. These aren’t just the Sarkaar’s responsibility.
E. Recommendations for Balanced Regulation
This paper offers some recommendations along the dimensions presented in the SSB framework.
Unified but Not Uniform Approach
This calls for creating a shared infrastructure and shared regulatory frameworks that are unified in purpose but not uniform in methods, recognising the need for contextual solutions.
The regulatory frameworks need to be layered. They should adapt to different AI applications and contexts, while maintaining core principles and allowing innovation in implementation.
The core principles of data protection and privacy, security, ethics, intellectual property protection, control over private data, rights over generated data in a calibrated manner, etc. need to be enunciated clearly, preferably in one single act of legislature.
For this layered approach to work, the regulatory framework should be designed in a democratised manner; there cannot be a centralised, top-down approach. Sector-specific guidelines should be developed within a common framework. For the fintech sector, for instance, there can be specific regulatory frameworks flowing from a broader central one (a simple illustrative sketch follows).
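As a purely illustrative sketch of what a layered framework could look like when expressed in code or configuration, the snippet below combines a hypothetical set of core principles with a sector-specific overlay for fintech. The principle names and sector rules are assumptions for demonstration, not proposals made in this paper.

```python
# Illustrative only: a hypothetical representation of a layered regulatory
# framework. Core principles are shared across sectors; sector layers add to
# them without overriding the core. All names and rules are assumptions.

CORE_PRINCIPLES = {
    "data_protection": "Personal data processed only with informed consent",
    "privacy_by_design": "Privacy safeguards built into systems, not bolted on",
    "security": "Baseline security standards for all AI deployments",
    "ip_protection": "Calibrated rights over training data and generated outputs",
}

SECTOR_LAYERS = {
    "fintech": {
        "model_audit": "Periodic audits of credit-scoring and fraud models",
        "explainability": "Adverse decisions must be explainable to customers",
    },
    "healthcare": {
        "clinical_validation": "AI diagnostic aids validated before deployment",
    },
}

def applicable_rules(sector: str) -> dict:
    """Core principles always apply; the sector layer extends them."""
    rules = dict(CORE_PRINCIPLES)
    rules.update(SECTOR_LAYERS.get(sector, {}))
    return rules

if __name__ == "__main__":
    for key, rule in applicable_rules("fintech").items():
        print(f"{key}: {rule}")
```

The design choice being illustrated is unification without uniformity: the core layer is common to all sectors, while each sector-specific layer adapts it to context.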
India’s DPI provides a good template for developing AI platforms and products which can provide value and equitable access to Samaaj, with security, privacy and other ethical principles coded into the system by design. This can be leveraged to plug in AI use-cases on a context-sensitive basis, so as to reduce entry barriers for Samaaj and Bazaar.
Distributed Agency
This approach aims to distribute the ability to solve problems, empowering individuals and communities to become part of the solution instead of being passive recipients. The idea is to create platforms where people can engage in problem-solving within their own contexts.
Local communities should have the power to participate in AI governance decisions through established, and not ad hoc, mechanisms.
Mechanisms should be created for public participation in AI policy development, and not just in post-implementation redressal mechanisms.
Community-led AI initiatives should be supported.
Infrastructure and Enablement
This stems from the concept of having an ecosystem of platforms that play different roles, instead of one single platform.
Digital public infrastructure should be created for AI development. These context-independent foundations will serve as platforms for interconnected, interoperable, and scalable ecosystems. The compute infrastructure and the models built using generic data fall into this category. The Sarkaar can fund this to some extent and offer incentives to create more open infrastructure. It should ideally be built through public-private partnership, with the public sector providing funding and the regulatory framework, and the private sector contributing its enterprising nature and agility. As the ultimate beneficiary, the Samaaj should be at the centre of all objectives.
There should be some context-aware layers that allow co-creation of tools that build trust. This will help various stakeholders to work together with a shared understanding of the use case and context. This needs to be driven by the private sector, since it is best suited to understand the use cases. The role of the Samaaj here is to guide through civil society and direct participation.
Context-intensive layers should allow the deployment of solutions in specific sectors. These can be built as ground-up initiatives as well (an illustrative sketch of these layers appears below, after the next two points).
Regulatory sandboxes for AI innovation should be mandatory.
Shared resources for AI safety research must be established. Note that academia and society have a big role to play in this.
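The three layers described above (context-independent foundations, context-aware co-creation, and context-intensive deployment) can be visualised with the minimal sketch below. The class and method names are hypothetical, intended only to show how each layer builds on the one beneath it, not to propose a design.

```python
# Illustrative only: a hypothetical three-layer ecosystem structure.
# Names are assumptions made to visualise the layering, not a proposed design.

class FoundationLayer:
    """Context-independent: shared compute and generic models, built through
    public-private partnership with Sarkaar funding and incentives."""
    def provide_model(self, name: str) -> str:
        return f"generic-model:{name}"

class ContextAwareLayer:
    """Co-creation of trust-building tools with a shared understanding of the
    use case, driven by the private sector and guided by civil society."""
    def __init__(self, foundation: FoundationLayer):
        self.foundation = foundation
    def build_tool(self, use_case: str) -> str:
        return f"tool[{use_case}] on {self.foundation.provide_model('base')}"

class ContextIntensiveLayer:
    """Sector-specific deployment, possibly as a ground-up initiative."""
    def __init__(self, tools: ContextAwareLayer):
        self.tools = tools
    def deploy(self, sector: str, use_case: str) -> str:
        return f"deployed in {sector}: {self.tools.build_tool(use_case)}"

# Hypothetical example: a crop-advisory service assembled across the layers
stack = ContextIntensiveLayer(ContextAwareLayer(FoundationLayer()))
print(stack.deploy("agriculture", "crop-advisory"))
```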
Market Structuring
This will make the market truly open and free, expanding positive externalities, removing barriers, reducing information asymmetry, etc.
Competition policies that prevent AI monopolies should be implemented. Vertical and horizontal integrations need to be carefully monitored. The jurisprudence has to keep pace as well.
Incentives for responsible AI development should be created. Instead of onerous mandatory disclosures, voluntary disclosures should be encouraged to keep the regime balanced.
AI startups and innovation ecosystems should be supported.
Conclusion
The Samaaj-Sarkaar-Bazaar framework is a useful lens through which to look at emerging technologies like AI. It emphasises putting Samaaj at the centre of all interactions, not just as a beneficiary, in the role of citizens with rights associated with the use of these technologies, but also as a participant in their development and deployment. It proposes distributed agency to empower individuals and communities, and shared infrastructure and regulatory frameworks that are unified in purpose but not uniform in methods, recognising the need for contextual solutions.