Swaraj, in his detailed blog post criticizing the DPIIT AI Working Paper, had highlighted the absence of a strong jurisprudential basis in the Working Paper’s proposals and the supporting reasoning. Picking up on this missing jurisprudential rigour, Shivam Kaushik draws on Kant’s distinction between noumena and phenomena to critique the DPIIT Committee’s approach and the Working Paper’s methodology. Shivam is a practicing lawyer based in Delhi. His interest lies in legal issues posed by emerging technologies.

One Nation, One License, One Big Shortcut: Doctrinal Stagnation in the DPIIT AI Working Paper
By Shivam Kaushik
Where to Start?
That was the first thought that crossed my mind when I sat down to write the first words on the recent Working Paper put out by the DPIIT Committee on Generative Artificial Intelligence and Copyright. Titled “One Nation One License and One Payment: Balancing AI Innovation and Copyright,” the paper evaluates the legal issues arising out of the use of copyrighted works as training data for AI models (it explicitly leaves out the ‘output’ aspects of AI models, which will be dealt with in the next Working Paper). Based on this limited assessment, the Committee proposes a mandatory “blanket” license with statutory royalty payments, purportedly to ensure lawful access to AI developers while guaranteeing fair compensation to creators.
In what follows, I focus on the Committee’s approach and the Working Paper’s methodology as central concerns. The paper refuses to answer the core legal questions of copyrightability, infringement, and fair dealing in the context of GenAI, while still prescribing a sweeping statutory licensing regime and quietly assuming those questions have already been answered. It lets policy preferences lead, and legal analysis follow, effectively inverting the usual order of legal reasoning. In doing so, it risks elevating copyright from one right among many to a kind of super-right, conveniently insulated from doctrinal scrutiny.
In a diatribe against the Working Paper, the starting point matters. It helps the reader understand the fundamental notions and assumptions of the people espousing a particular stance (including myself). For example, the paper notes that a majority of the tech/AI industry stakeholders consulted by the Committee advocated for a “blanket exception” (the Working Paper mentions the word “blanket” eighteen times. Maybe it’s just the weather) for Text and Data Mining (TDM) to enable GenAI training on copyrighted works. Simply put, they don’t want to pay for using copyrighted works in AI training datasets. But if that is what they want, why did they not argue that such training does not infringe at all? Or that copyright does not extend to such uses?
It is because they appear to adopt what I would call a “blanket conception” of copyright: if a copyrighted work is a tumbler, then every single letter or word inside the tumbler, whether expressive or non-expressive, is off limits. Any use is an infringing use. Although the Committee never explicitly endorses this conception, the structure of the report suggests that it operates on this very assumption. How else can you justify a mandatory “blanket” license in favour of AI developers using copyrighted works in AI training, without assuming infringement in the first place?
What is even more striking is that though the paper states it “does not attempt to resolve these questions or offer definitive conclusions on whether infringement is made out and/or the ‘fair dealing’ exception applies” (p.22), it dedicates nearly ten pages (pp. 13–22) to explaining why AI training may involve copyright infringement—only to avoid taking a position on the issue eventually. The asymmetry is telling! If the core legal questions of copyrightability, infringement, and fair dealing remain open, doesn’t that make the paper and its recommendations premature? And more importantly, if its recommendations are adopted in law, would those underlying questions not be shaped, if not rendered redundant, by the new framework?
Learning from Kant to Say Can’t: What the Committee Sees v. What it Refuses to See
To better explain the Committee’s approach, it is helpful to draw on Kant’s distinction between noumena and phenomena. As per Kant, noumena are things-in-themselves, the underlying reality that cannot be fully grasped by the human mind, while phenomena are observable appearances shaped by human perception and senses. The Committee appears to treat the legal questions of copyrightability, infringement, and fair dealing in respect of GenAI as noumena: too complex, too indeterminate, and too inconvenient to conclusively answer. A convenient approach in view of the inconvenient questions posed by AI that challenge our notions and assumptions about copyright.
As a result, the Committee shifts its attention toward the phenomena: the observable consequences of AI training, the scale of data ingestion, including copyrighted works, and the commercial context in which AI models operate. The result is a sleight of hand. By declining to conclusively address the underlying legal questions, the Committee clears the rhetorical space to justify a policy solution that leaves the impression that those questions have already been answered.
The problem with treating core copyright questions as unknowable abstractions is that doing so reverses the proper order of legal reasoning. Under ordinary circumstances, one first determines whether a legal right exists, whether it has been infringed, and then examines whether an exception applies. Only after answering these questions does one consider whether the law should be amended to better reflect technological realities. But the Committee’s approach flips this sequence. Instead of legal analysis informing policy, policy preferences seem to be shaping (or even pre-empting) legal conclusions.
Indeed, if the blanket licensing scheme is adopted, the practical effect would be to freeze the doctrinal evolution of Indian copyright law in the context of generative AI. Courts would have little reason to adjudicate the meaning of reproduction, the scope of substantiality, or the contours of fair dealing when statutory licensing already supplies the path of least resistance. In this way, the Committee’s proposal risks not merely influencing legal interpretation but replacing it with a policy framework that short-circuits the judicial process. Thus, under the cloak of practical problem-solving, the law itself would be (in)advertently settled. It needs mentioning that the paper does refer to the pending ANI v. OpenAI litigation before the Delhi High Court and other pending cases (p.22). Yet on the pretext that “Relying on case law to resolve these complex issues may take time, whereas the need for clarity on the underlying issues is much more urgent” (p.22), the paper proceeds to propose its policy panacea while avoiding any concrete position on the underlying issues themselves.
Conclusion
To the best of my understanding, infringement in the case of AI training (to which the present Working Paper limits its scope) is far from a foregone conclusion. I will expound these concerns in a later post. But before I sign off, it is imperative to note that the Committee’s most consequential misstep lies not merely in its conclusions, but in its method of analysis. It is this methodological confusion, particularly the unresolved slippage between theory and practice that runs through the Report, that warrants attention at the outset. Here, I am reminded of a quippy quote by Leonardo (da Vinci, not DiCaprio), who wrote:
“Those who are in love with practice without knowledge are like the sailor who gets into a ship without rudder or compass and who never can be certain whither he is going. Practice must always be founded on sound theory…”
To build a coherent legal framework for generative AI, one must begin with a solid legal analysis. Without that foundation, the Working Paper risks steering Indian copyright law into unfamiliar waters without a rudder or a compass, confident in its destination, but uncertain of the route.
