In the first post in this series, we saw how evolution in Artificial Neural Networks has blurred the lines of ‘creativity’, or the lack thereof, between software and human beings, specifically in the case of DeepDream. We are currently at a stage in software evolution where software is not precisely a ‘tool’, but not precisely an independent ‘contributor’ either. The second part of this post attempts to place that in the context of current copyright law. [Long post ahead]
The standard of creativity necessary for new works to be copyrightable is currently the ‘modicum of creativity’ standard as per Eastern Book Company and Ors. v. D.B. Modak and Anr. In this case, the Court held that a ‘minimal degree of creativity’ is required, and that ‘there must be some substantive variation and not merely a trivial variation’.
‘Creativity’, though, has long been a vague standard, and remains so here too. On the one hand, in favour of DeepDream creating ‘substantive variation’ and being ‘creative’, it can be argued that it takes in raw input and creates an output based on its own ‘understanding’ of the patterns that it finds in that input – patterns that are unexpected and unpredictable – perhaps with an application of ‘mind’. On the other hand, it can be argued that the core images fed into DeepDream are largely unchanged, and therefore, while it introduces some ‘novel’ elements, there is no more than a ‘trivial variation’. Both sides have valid arguments, and I leave it to our readers to pick which side they support.
It is significant, though, that the above is even an arguable point today, when it would have been unimaginable with the procedural creations of software only a few years ago. Beyond DeepDream, we even have ANNs like AlphaGo, the Go-playing ANN, which came up with moves that surprised even the masters of a game that has been practised by humans for millennia.
The second crucial question herein is who the author of this work would be, since software has no legal personality. Under Section 2(d) of the Copyright Act, 1957, ‘author’ means:
“…(vi) in relation to any literary, dramatic, musical or artistic work which is computer-generated, the person who causes the work to be created;”
The first problem here arises in its use of the phrase ‘the person who causes the work to be created’. Ascertaining who ‘causes’ a work to be created is a question of the proximity of a natural or legal person to the creation of the ‘expression’ in the content in question – the more closely or directly a person is involved in creating the ‘expression’, the more he or she contributes to it, and the more likely he or she is to qualify as the person ‘who causes the work to be created’. It must be stressed here that the proximity would need to be to the creation of the final expression, not the idea of it. This question, however, can no longer be answered quite so clearly in the case of complex, modern ANNs.
For instance, in the case of DeepDream, it is not always ascertainable whether any legal or natural person caused a work to be created. There are three possible candidates for the ‘person who causes the work to be created’ in the case of ANNs:
a) the person who inputs the search query/image;
b) the creator/programmer of the algorithm; or
c) in some cases, the person who ‘taught’ the ANN its patterns through training datasets.
These three apply, of course, to DeepDream as well. Now, due to the very nature of ANNs as discussed above, the programmer of the ANN has restricted control over the output of the software, and quite definitely does not contribute to the specific ‘expression’ it generates. In a scenario where ANNs enter daily public usage, giving the programmer the copyright as a rule would basically be like giving Adobe the copyright over all creations made with Photoshop. Conversely, the person who inputs the search query/image arguably dictates the output, and therefore ‘causes the work to be created’. The counter-argument to this, however, is that at least part of the expression that is represented by DeepDream’s output, such as the pagodas in the illustration used, is entirely its own creation, based on its own understanding of the parameters involved, even if it is influenced by the user. This is where we deal with the fact that the ‘tool’ of creation of the work is no longer a mere tool.
The third case is that of the person who feeds the training datasets into the ANN, giving it the ‘knowledge’ necessary for its processing. There are several strategies for teaching ANNs, some of which involve a human actually ‘teaching’ it (‘supervised learning’) while some do not (‘unsupervised’ or ‘reinforcement learning’), but all of which are oriented towards getting the ‘expected output’ out of the ANN. However, while an ANN’s ‘teachers’ definitely influence what it learns and create bias in its understanding, they do not influence its specific, query-to-query output. As noted in the last post, part of the conclusion of the DeepDream experiment was anomalies in exactly this context. And once again, this falls foul of the aforementioned Adobe example.
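For readers unfamiliar with the distinction, the point about ‘teachers’ influencing what a network learns without dictating any individual output can be illustrated with a toy example of supervised learning. The sketch below is purely illustrative (a single perceptron, nothing like DeepDream’s actual architecture): the human trainer supplies labelled examples, but each eventual prediction is produced by the learned weights, not by the trainer directly.

```python
# Minimal illustration of 'supervised learning': a human-labelled dataset
# steers the model towards an 'expected output', but the trainer never
# authors any single prediction directly.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Learn weights for a two-input perceptron from labelled examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # supervision: compare output against the label
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The 'teacher' supplies the labels (here, the logical AND function)...
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)

# ...but the learned weights, not the teacher, generate each output.
print([predict(w, b, x1, x2) for x1, x2 in samples])  # → [0, 0, 0, 1]
```

In unsupervised or reinforcement learning the human’s role is even more removed: no labels are given at all, and the system discovers structure or strategies on its own, which is precisely why attributing ‘causation’ of a particular output to the trainer becomes so strained.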
In all three cases, then, the persons involved are definitely substantially removed from the process of the creation of the ‘expression’, leaving us with a confusing question of first authorship.
The second shortcoming of the above definition is its use of the term ‘computer-generated’, which basically presumes that the computer is merely a tool for the creation – an assumption that is no longer comfortably true. Even keeping aside the question of according ownership of copyrights to software, which would require much more ‘intelligence’ on the part of A.I., the current structure cannot deal with a chain of creation of works where the actual creator or a contributor of the ‘expression’ is not a human or a legal person.
In case the origin of the expression purported to be copyrighted cannot be fixed on a human or a legal person, there are two options:
- Allocate the ‘first authorship’ of the work to the human or legal person closest to the creation of the expression, even if she/he/it does not ‘cause the work to be created’ per se.
- More radically, declare the work in question to have no first author.
As an overall policy choice, the first option is more practicable, and is essentially the one used currently, though it ignores any and all contribution of the A.I. to the creation of the work, basically rendering it a tool. This is what, arguably, happened with the works created by Google’s algorithms (including DeepDream), which were sold by Google itself, with the proceeds going to the Gray Area Foundation for the Arts. This also makes sense according to the ‘incentives’ involved in creating these works, per the utilitarian model, as the A.I. itself is incapable of experiencing the same.
On the other hand, if the A.I.’s contribution is taken to the other extreme, A.I. lacking legal personality, the second option becomes more appealing. U.S. copyright law takes this step, in fact, insofar as the US Copyright Office has explicitly stated, in the context of the Monkey Selfie, that “Because copyright law is limited to ‘original intellectual conceptions of the author,’ the Office will refuse to register a claim if it determines that a human being did not create the work.” Of course, the position changes significantly if the A.I. is recognised to have a separate legal personality, or perhaps the status of ‘electronic persons’ as recommended by the European Parliament’s Committee on Legal Affairs draft paper noted in the beginning. This leads the argument to its other conclusion, with a robot being treated as the owner of its ‘own intellectual creation(s)’ – to what effect, however, remains unclear.
The argument in favour of this would be that if the authorship cannot be comfortably traced to a legal or natural person, and since A.I. has no legal personality, the better option would be to simply have no ‘author’. This option also has the additional appeal of adding more ‘expression’ to the public domain. A major issue with this, however, is that A.I. software is arguably not independent enough yet, and still significantly relies on human input. Furthermore, this would arguably hamper further investment and innovation in ANNs, creating uncertainty regarding the commercial viability of their outputs.
The central question would necessarily, therefore, remain that of the human versus A.I. involvement in the actual ‘creation’ of the work, and a decision between the above two choices will have to be made on a case-by-case basis. We are, currently, somewhere in the middle of the evolution of A.I. to a stage where it can truly be independent (though we have arguably been here for some time). Whatever stage of A.I. evolution we are at, though, it is definitely an exciting time!