
The increasing adoption and deployment of artificial intelligence (“AI”) enabled tools, platforms, and solutions by market participants in the financial sector, including the securities markets, are now widely recognised, both in India[1] and globally[2].
The Securities and Exchange Board of India (“SEBI”) has long prescribed reporting norms for AI use by regulated intermediaries[3] and released a consultation paper[4] soliciting public feedback on proposed amendments to various regulations that would have affixed sole responsibility for the consequences of AI use on the regulated entity using such tools. While not ultimately adopted, these proposals offered a glimpse into the regulatory thought process.
Since then, the rapid evolution of the “state of the art”, particularly the emergence of Large Language Models and Generative AI, has prompted regulators and governments to explore new frameworks for AI governance. Globally, the focus has shifted from whether to regulate AI to how best to do so – balancing innovation with safeguards against risks such as bias, opacity, and systemic failures.
The updated consultation report of the International Organization of Securities Commissions (“IOSCO”) on AI in capital markets[5] and the AI Principles of the Organisation for Economic Co-operation and Development (“OECD”)[6] have served as reference points for national regulators. India’s regulatory posture has similarly evolved: SEBI has consistently demonstrated an intent to develop contextual, principle-based frameworks suited to Indian requirements, most recently with the Cybersecurity and Cyber Resilience Framework (“CSCRF”)[7].
In this context, SEBI’s recent consultation paper, “Guidelines for Responsible Usage of AI/ML in Indian Securities Markets” (“Paper”)[8], represents a necessary and potentially transformative opportunity to establish a regulatory framework in an area where the deployment of AI could have far-reaching consequences across the financial ecosystem.
SEBI’s Consultation Paper
The Paper discusses the September 2021 version of the IOSCO guidelines on AI usage and, based on the principles distilled from them, makes recommendations on model governance, investor protection and disclosure, testing, fairness and bias, privacy, and cyber security.
While these recommendations draw on well-established global principles, this note examines areas where the proposed regulation may benefit from a more nuanced approach, and where a one-dimensional regulatory perspective on AI use in the securities markets may prove restrictive.
Model Governance
The Paper recommends an internal governance framework for regulated entities (“REs”) using AI/ML models, requiring internal teams with the skills and expertise to monitor and oversee the performance, efficacy, security, explainability, and interpretability of such models. The governance mechanism largely tracks existing legal requirements, while supplementing them with AI/ML-specific obligations.
These include:
- Board/ sub-committee level oversight;
- Maintaining audit systems and ensuring explainability of the AI/ML Models;
- External audits with SEBI oversight to ensure transparency and fairness;
- Adopting risk control measures to ensure operational resilience;
- Enabling user autonomy and agency in decision-making; and
- Ensuring responsible and ethical outcomes, including the ability to fall back to manual processes and the retention of secure, time-stamped logs that allow for chronological reconstruction of events (a minimal logging sketch follows this list).
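By way of illustration only, a tamper-evident, time-stamped log of the kind contemplated above could be built along the following lines. This is a minimal Python sketch under stated assumptions: the hash-chaining design, the field names, and the `model_decision` event are illustrative choices, not requirements prescribed by the Paper.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only, hash-chained log: each entry embeds a UTC timestamp
    and the hash of the previous entry, so any tampering breaks the chain
    and events can be reconstructed in strict chronological order."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, event: str, payload: dict) -> dict:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "payload": payload,
            "prev_hash": self._last_hash,
        }
        # Hash the canonical serialisation of the entry body.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash; a single altered entry invalidates the chain.
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "payload", "prev_hash")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

# Hypothetical usage: record one AI/ML model decision and verify the chain.
log = AuditLog()
log.record("model_decision", {"model": "scorer_v2", "output": "approve"})
assert log.verify()
```

Chaining each entry to the hash of its predecessor means any retroactive edit invalidates every subsequent entry, which is one way to make chronological reconstruction verifiable rather than merely possible.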
While the Paper notes that REs may rely on third-party AI providers and existing outsourcing frameworks such as service level agreements, it maintains that any such AI services will be treated as services provided by the RE.
Investor Protection and Disclosure
Echoing the principle of transparency, the Paper proposes that REs disclose to clients their use of AI in operations that materially impact them, such as trading, portfolio management, advisory, and support services. This disclosure must include information on product features, risks, limitations, and model accuracy. The Paper also requires disclosures, in clear and comprehensible language, regarding the quality and source of the data used to generate AI/ML-driven decisions.
The Paper further requires REs to test and monitor AI models on a continuous basis, including by maintaining records of input and output data for at least five years, performing shadow testing of AI/ML models with live traffic, and monitoring models to ensure that the outcomes produced are explainable, traceable, and repeatable. Models are also expected to be fair and must not discriminate between different categories of customers.
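To make the shadow-testing requirement concrete, the sketch below illustrates the core discipline in Python, using hypothetical stand-in models: the candidate model receives the same live inputs as the production model, but only the production output ever reaches the client, while divergences are logged for offline review. The handler structure and field names are assumptions for the example, not a prescribed design.

```python
from typing import Any, Callable

def make_shadow_handler(
    production_model: Callable[[Any], Any],
    candidate_model: Callable[[Any], Any],
    log: list,
) -> Callable[[Any], Any]:
    """Wrap two models so live traffic is served by production while the
    candidate runs in shadow mode; only production output is returned."""
    def handle(request: Any) -> Any:
        live_output = production_model(request)
        try:
            # Shadow call must never affect the client-facing result.
            shadow_output = candidate_model(request)
        except Exception as exc:
            shadow_output = f"error: {exc}"
        log.append({
            "input": request,
            "live": live_output,
            "shadow": shadow_output,
            "match": live_output == shadow_output,
        })
        return live_output  # the client only ever sees production output
    return handle

# Hypothetical usage with stand-in models:
log: list = []
handler = make_shadow_handler(lambda x: x * 2, lambda x: x * 2 + 1, log)
print(handler(21))        # 42 -- the production result
print(log[0]["match"])    # False -- divergence recorded for review
```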
The Case for Role-Based Regulation
For the most part, the Paper appears to proceed on the premise that AI/ML models (a term the Paper uses interchangeably to refer to all AI-enabled or AI-driven applications, tools, and techniques) are created and deployed by REs (or on their behalf) for use in the securities market. This framing naturally lends itself to a top-down regulatory approach, one that places primary responsibility on REs for the entire lifecycle of the AI solutions they deploy.
While this approach is understandable, it may not fully reflect the operational complexity of the AI ecosystem today. As recognised in the EU AI Act[9] and the updated OECD framework for AI classification[10], multiple actors operate at different points in the AI value chain. These include developers who build or fine-tune models, integrators or service providers who package and adapt them for specific industry use cases, and lastly, entities that deploy or offer the final service to users. Each actor exercises different levels of control and insight, and should therefore be subject to appropriately tailored regulatory obligations.
The updated IOSCO consultation report also recognises this complexity, observing that AI use cases in capital markets range from algorithmic trading to robo-advisory, surveillance, and customer-facing chatbots, each requiring varying degrees of involvement by the RE.
Given this diversity, it may be useful to consider a calibrated, role-based regulatory approach, where responsibilities are distributed across AI actors. For example:
- Model developers could be responsible for the quality of training data, explainability features, and bias testing;
- Integrators/ service providers could ensure robustness during adaptation and delivery of the model;
- Deployers could oversee real-world monitoring, efficacy, and incident response; and
- Customer-facing REs could focus on transparency, disclosures, and user grievance redressal mechanisms.
By clearly mapping regulatory duties to actors best positioned to fulfil them, SEBI can reduce undue compliance burdens, enable contractual risk allocation (such as through service-level agreements), and support innovation while maintaining accountability.
Toward a Calibrated and Future-Ready Framework
While the Paper is a welcome step toward regulating AI use in the Indian securities market, the final framework would benefit from greater nuance and proportionality.
Key areas for refinement include adopting a risk- and function-based classification of AI systems, recognising that not all use cases carry the same level of risk, and calibrating obligations to the size and role of the RE. Smaller intermediaries, or those relying on third-party tools, may need tailored requirements, particularly where they do not control the development of the underlying models.
SEBI could also consider aligning its framework with existing regimes such as the CSCRF and the Digital Personal Data Protection Act, 2023 (“DPDPA”), to avoid regulatory duplication. For instance, incident reporting and system inventory requirements under the CSCRF could support AI oversight, while DPDPA principles on transparency and fairness may dovetail with AI-related disclosures. The use of “compliance as a service” certification solutions for general-purpose AI models could further enhance such oversight, enabling automated compliance checks, anomaly detection, and real-time visibility into critical AI model behaviour.
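As a simplified illustration of what such automated monitoring might look like, the Python sketch below flags anomalous model behaviour with a rolling z-score check. The window size, baseline length, and threshold are assumptions made for the example; neither SEBI, the CSCRF, nor the DPDPA prescribes any particular technique.

```python
from collections import deque
import random
import statistics

class DriftMonitor:
    """Rolling monitor over model outputs: flags any observation that
    deviates from the recent window by more than a z-score threshold."""

    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record an output; return True if it looks anomalous."""
        anomalous = False
        if len(self.window) >= 30:  # require a minimal baseline first
            mean = statistics.fmean(self.window)
            stdev = statistics.stdev(self.window)
            if stdev > 0 and abs(value - mean) / stdev > self.z_threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

# Hypothetical usage: a stable baseline of model scores, then a sudden spike.
random.seed(0)
monitor = DriftMonitor()
scores = [random.gauss(0.5, 0.01) for _ in range(60)] + [5.0]
flags = [monitor.observe(s) for s in scores]
print(flags[-1])  # True -- the spike would be surfaced for human review
```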
Finally, REs that adopt strong governance and vendor diligence could be given some degree of protection from inadvertent lapses in low-risk use cases, to encourage responsible innovation without unduly penalising compliant entities.
[1] SEBI, Guidelines for Research Analysts, available here; and SEBI, Cybersecurity and Cyber Resilience Framework for SEBI Regulated Entities, available here.
[2] Hong Kong Institute for Monetary and Financial Research, Financial Services in the Era of Generative AI: Facilitating Responsible Adoption, available here; Financial Services and the Treasury Bureau, Policy Statement on Responsible Application of Artificial Intelligence in the Financial Market, available here; Act on Promotion of Research and Development and Utilization of Artificial Intelligence-Related Technologies, available here; Monetary Authority of Singapore, Artificial Intelligence (AI) Model Risk Management, available here.
[3] SEBI, Reporting for AI and ML applications and systems offered and used by Market Infrastructure Institutions, available here; SEBI, Reporting for AI and ML applications and systems offered and used by Market Intermediaries, available here; and SEBI, Reporting for AI and ML applications and systems offered and used by Mutual Funds, available here.
[4] SEBI, Proposed Amendments with respect to assigning responsibility for the use of AI tools by market infrastructure institutions, registered intermediaries and other persons regulated by SEBI, available here.
[5] IOSCO, Artificial Intelligence in Capital Markets: Use Cases, Risks and Challenges, available here.
[6] OECD, AI Principles, available here.
[7] SEBI, Cybersecurity and Cyber Resilience Framework for SEBI Regulated Entities, available here.
[8] SEBI, Guidelines for Responsible Usage of AI/ML in Indian Securities Markets, available here.
[9] European Parliamentary Research Service, Artificial Intelligence Act, available here; and The European Enterprises Alliance, Joint Letter on the European Commission’s Proposal for an AI Act, available here.
[10] OECD, OECD Framework for the classification of AI systems, available here.