
Generative AI Governance Summit Dialogue Series 2024

生成式人工智能治理高峰對話系列2024

 



Will the EU's AI Act Have a 'Brussels Effect' on China? 

歐盟《人工智能法》會對中國產生「布魯塞爾效應」嗎?

Date & Time: April 26, 2024 (Friday), 21:00–22:15 (HKT), 09:00–10:15 (EDT)

日期和時間: 2024年4月26日(星期五), 21:00–22:15 (HKT), 09:00–10:15 (EDT)

Venue: Zoom Webinar

地點: Zoom 線上研討

Language: Chinese & English (with simultaneous interpretation)

語言: 中文和英文(提供同聲傳譯)

In March 2024, the European Parliament officially adopted the EU AI Act, a groundbreaking piece of legislation for the European Union. Concurrently, leading Chinese legal scholars have released a widely discussed Model AI Law (Scholars' Draft) and are actively seeking feedback from policymakers and industry experts across China.

 

In light of these recent developments, the second session of our Generative AI Governance Summit Dialogue series will focus on the EU's AI Act and explore whether it will have a "Brussels Effect" on China. During this virtual roundtable, we will delve into critical issues that are currently generating intense debate in both the EU and China. Topics will include the scope of AI legislation, management of open-source models, the risk-based regulatory framework, oversight of highly impactful AI systems, and various practical implementation challenges. We invite you to join this enlightening discussion as we navigate these important topics.

 

About the series:

In 2024, the Philip K. H. Wong Centre for Chinese Law, in collaboration with the China Artificial Intelligence Industry Alliance (AIIA) Policy and Regulation Working Group, will host a series of summit dialogues on generative AI regulation and governance. These events aim to bring together leading scholars and experts from around the world to discuss the governance challenges posed by generative AI and develop strategies to address them. A recording of the first event in this series is available on YouTube: Generative AI Governance Summit Dialogue Series 2024 - Generative AI and Intellectual Property (youtube.com)

 


 


 

★Summit Dialogue Series Two: Will the EU's AI Act Have a 'Brussels Effect' on China? 第二場:歐盟《人工智能法》會對中國產生「布魯塞爾效應」嗎?★

Conveners 召集人

Angela Huyue Zhang, Director, Philip K. H. Wong Centre for Chinese Law, The University of Hong Kong

張湖月, 香港大學黃乾亨中國法研究中心主任

 

Linghan Zhang, Professor at the Institute of Data Law, China University of Political Science and Law

張凌寒, 中國政法大學數據法治研究院教授

European Experts 歐盟專家

Carme Artigas, Co-Chair of the United Nations High-Level Advisory Body on AI 聯合國人工智能高級咨詢委員會聯席主席

 

Philipp Hacker, Professor for Law and Ethics of the Digital Society, European University Viadrina Frankfurt 奧得河畔法蘭克福歐洲大學數字化社會法律與道德教授

Chinese Experts 中國專家

Linghan Zhang, Professor at the Institute of Data Law, China University of Political Science and Law

張凌寒, 中國政法大學數據法治研究院教授

 

Weixing Shen, Professor of Law, Dean of Intelligent Rule of Law Research Institute, Tsinghua University

申衛星, 清華大學法學院教授, 清華大學智能法治研究院院長


Generative AI and Intellectual Property

生成式AI知識產權問題

Date & Time: January 26, 2024 (Friday), 21:30–22:45 (HKT), 08:30–09:45 (EST)

日期和時間: 2024年1月26日(星期五), 21:30–22:45 (HKT), 08:30–09:45 (EST)

Venue: Zoom Webinar

地點: Zoom 線上研討

Language: Chinese & English (with simultaneous interpretation)

語言: 中文和英文(提供同聲傳譯)

Recently, the Beijing Internet Court made a groundbreaking decision by granting copyright protection to an image generated by Stable Diffusion. This landmark ruling has sparked global debates. In contrast, the United States has seen at least four instances where the Copyright Office has refused to grant copyright protection to AI-generated content. In light of these developments, the inaugural event of the series will specifically focus on the IP issues surrounding generative AI. This dialogue will feature distinguished experts from both China and the United States, including the Chinese judge who presided over the above-mentioned case.

 

Please stay tuned for further details about our upcoming events, and we look forward to your participation!

 


 

All who are interested are welcome to register!

 

★Summit Dialogue Series One: Generative AI and Intellectual Property 第一場:生成式AI知識產權問題★

Conveners 召集人

Angela Huyue Zhang, Director, Philip K. H. Wong Centre for Chinese Law, The University of Hong Kong

張湖月, 香港大學黃乾亨中國法研究中心主任

 

Linghan Zhang, Professor at the Institute of Data Law, China University of Political Science and Law

張凌寒, 中國政法大學數據法治研究院教授

Chinese Experts 中方專家

Ge Zhu, Deputy Tribunal Chief, The First Comprehensive Division, Beijing Internet Court

朱閣, 北京互聯網法院綜合審判一庭副庭長

 

Guobin Cui, Professor of Law, Director of the Center for Intellectual Property, Tsinghua University

崔國斌, 清華大學法學院教授、知識產權法研究中心主任

 

Qian Wang, Professor of Law, East China University of Political Science and Law

王遷, 華東政法大學法律學院教授

U.S. Experts 美方專家

Jason M. Schultz, Professor of Clinical Law, New York University

Jason M. Schultz, 紐約大學法學院教授

 

James Grimmelmann, Tessler Family Professor of Digital and Information Law, Cornell University

James Grimmelmann, 康乃爾大學法學院數字信息法教授

Highlights | Generative AI and Intellectual Property

Chinese Version 中文版本: 精彩回顾|生成式人工智能治理高峰对话系列——知识产权 (qq.com)

On January 26, 2024, the inaugural session of the “Generative AI Governance Summit Dialogue” series was successfully held, focusing on intellectual property issues. The event was co-hosted by Professor Angela Zhang, Director of the Philip K. H. Wong Centre for Chinese Law at The University of Hong Kong, and Professor Linghan Zhang from China University of Political Science and Law, who is also the leader of the AI Industry Development Alliance (AIIA) Policy and Regulation Working Group and a member of the United Nations High-Level Advisory Body on AI.

The talk featured esteemed panelists, including Ge Zhu, the Presiding Judge of the Beijing Internet Court’s “AI Text-to-Image” copyright infringement case, Professor Guobin Cui from Tsinghua University, Professor Qian Wang from East China University of Political Science and Law, Professor Jason M. Schultz from New York University, and Professor James Grimmelmann from Cornell University. The event was conducted as a Zoom roundtable and livestreamed on four popular platforms: PKULaw, XiaoeTech, Bilibili, and WeChat, attracting nearly 7,000 online viewers. The lively atmosphere and engaging discussion were highly appreciated and praised by the audience.

Linghan Zhang: In early 2024, Professor Angela Zhang and I initiated the “Generative AI Governance Summit Dialogue Series.” Our aim was to bring together top scholars and experts from around the globe for in-depth conversations about the governance challenges and response strategies arising from generative AI. Our first event focuses on generative AI and intellectual property, featuring prominent speakers from both academia and the tech industry. Looking back, intellectual property was one of the first legal frameworks to be challenged by internet technology more than two decades ago, and laws like the U.S. Digital Millennium Copyright Act have been adapting to technological advancements ever since.

 

 

U.S. Experts

 

1. Fundamental Challenges to Copyright Doctrines

 

Angela Zhang: Professor Mark Lemley from Stanford University has highlighted that generative AI presents challenges to two core copyright doctrines: the idea-expression dichotomy and the substantial similarity test for infringement. Can you briefly explain these challenges? Do you have any suggestions on how we should respond to them?

The idea-expression dichotomy distinguishes between ideas, which cannot be copyrighted, and their specific expressions, which can be.

The substantial similarity test for infringement is used to determine if a new work is too similar to an existing copyrighted work.

James Grimmelmann: Generative AI enables creators to produce highly rich content by inputting simple instructions, i.e., prompts. If we adhere to the traditional “idea-expression dichotomy”, copyright law should only protect the prompts, as they represent the creator’s expression. However, if the prompts are too brief, they might not be protected by copyright law at all. In addition, copyright law determines infringement according to the “substantial similarity” test. If the creator’s expression is embodied in the prompts, it is these prompts that should be compared, even though the AI-generated content (AIGC) may vary; conversely, two distinct prompts could generate substantially similar content. These issues pose considerable challenges. Copyright emerged with the advent of printing technology, which significantly lowered the production costs of artistic works. It took 150 years from the invention of the printing press for the first copyright law to come into existence. We are now in the very early stages of the generative AI era. Our understanding of how generative AI fosters creativity and the design of appropriate incentive mechanisms is still limited. It is also hard to identify the potential risks associated with AIGC. All these factors contribute to the prevailing uncertainty surrounding the current situation.

 

Jason M. Schultz: The theory of copyright law posits that its purpose is to serve as an incentive, encouraging individuals to engage in creative pursuits. With the widespread adoption of generative AI, both the cost of creation and the barriers to entry have been significantly reduced. If the essence of art lies in the conception of expression, while expression itself is merely a mechanical execution process, then the “idea-expression dichotomy” warrants re-examination in the context of generative AI simplifying this process. In a world dominated by generative AI, the necessity of copyright protection as a creative incentive becomes highly debatable.

 

2. Copyright Protection for AI-Generated Content (AIGC)

 

Angela Zhang: Recently, the Beijing Internet Court made a groundbreaking decision by granting copyright protection to an image generated by Stable Diffusion. This landmark ruling has sparked global debates. In contrast, the U.S. has seen at least four instances where the Copyright Office has refused to grant copyright protection to AIGC. What are your thoughts on this?

 

James Grimmelmann: Interestingly, the approaches taken by China and the U.S. are not that different. All relevant cases in the U.S. have encountered issues during the copyright registration phase. These cases can be viewed as a “test” meant to establish a broader copyright law precedent. The creators involved had either advocated for registering the AI itself as the author or provided insufficient disclosure of human prompts and AI involvement, failing to emphasize the importance of humans in the creative process. The nature of the case in the Beijing Internet Court is entirely different. It is a highly specific infringement lawsuit, with the generation process and human prompts thoroughly disclosed. Therefore, the differences between the two jurisdictions might not be substantial. In the U.S., a case like the one in the Beijing Internet Court could potentially have a similar outcome. Although the U.S. Copyright Office denied registration for “Théâtre d'Opéra Spatial,” an award-winning AI-involved art piece with over 600 prompts, the creator did not reveal the specific content of the prompts or the initial AIGC. Currently, the U.S. Copyright Office and courts are encouraging creators to actively disclose their involvement; otherwise, they risk being denied copyright protection due to insufficient evidence.

 

Jason M. Schultz: Furthermore, U.S. courts are considering ways to establish criteria for assessing the copyrightability of AIGC. This entails a general challenge: should judgments be made from an ex-ante or ex-post perspective? Examining facts after a dispute arises seems to better elucidate contextual circumstances. Besides, the notion of “originality” warrants further clarification, as the current legal requirements for originality are rather minimal.

 

3. Copyright Infringement and Fair Use

 

Angela Zhang: In a series of U.S. lawsuits, the main question was whether using copyrighted works to train AI is considered infringement. The dominant view is that this practice is exempted under “fair use” since AIGC is transformative. However, in certain instances, AIGC may closely resemble the original works, as seen in the New York Times case. In defense, OpenAI contended that the overlap arose because the prompts supplied by users were highly suggestive. What are your thoughts on this matter?

Fair use is a doctrine in copyright law that allows for the limited use of copyrighted material without obtaining permission from the copyright holder or compensating them.

James Grimmelmann: In the U.S., there are two primary trends regarding “fair use.” The first is transformative use, which involves creatively adapting the work of others. The second is copying materials without artistic expression, such as research archives and search engines; while these systems feed on copyrighted data, their output does not compete with the original works. However, generative AI straddles both trends. It not only feeds on copyrighted works but also produces expressive derivative content. As a result, generative AI does not fit neatly into either of the existing fair use scenarios.

 

Jason M. Schultz: Assessing whether the use of copyrighted data for AI training constitutes fair use requires considering the AIGC itself. Even if a company solely relies on New York Times web pages to train its AI system, it might generate millions of unique expressions, with only 0.1% regarded as potentially infringing. A crucial debate centers on the difference between AI companies automatically scraping data for training and obtaining specific authorization from copyright holders. Two main concerns arise from this issue: First, competition. To encourage competition among different AIs, we cannot limit access to training data exclusively to the wealthiest companies. However, acquiring licenses from all copyright holders can be extremely costly. Second, bias. Using a broader range of data for AI training helps prevent biased language. For instance, in the U.S., the left opposes AI using its data for training, while the right is more accommodating. If a licensing regime is implemented, could training data become dominated by right-wing perspectives? Furthermore, the AIGC largely depends on the prompts, making it challenging for courts to determine whether AI has truly copied a specific book. While users can force AI to copy, it is uncertain how many would actually do so. This leads to another question: if the AIGC does exhibit substantial similarity, who should be held responsible—the user or the AI service provider?

 

James Grimmelmann & Jason M. Schultz: There is no one-size-fits-all answer to this question, as the purposes and contexts in which users employ AI vary. We can only analyze the question on a case-by-case basis. Copyright law should be reasonably tolerant of private space. In the case of the Beijing Internet Court, where the AIGC is uploaded into the public domain, the nature of the issue changes, especially when it involves unfair competition. In sum, generative AI is still in its early stages of development, and there are no definitive answers to copyright issues. We must rely on more judicial cases to expand our understanding. A well-designed legal system should enable both large and small companies to flourish, guarantee equal access to AI for everyone, and promote fair competition. If the law only permits big companies to negotiate with other big companies, it does not resolve the problem, and artists will not receive fair compensation. Moreover, the alleged financial losses suffered by copyright holders might result more from increased competition, rather than just someone stealing their works through ChatGPT.

 

Chinese Experts

 

Ge Zhu: In the Chinese “AI Text-to-Image” case, the plaintiff used a large AI model to generate an image called “Spring Breeze Has Brought Tenderness.” The defendant, a poetry author, used the image as an illustration when publishing his poetry. The plaintiff claimed that the defendant removed the signature watermark from the image and uploaded it to social media, infringing the plaintiff’s right of authorship and right of communication through information networks. The court held that the image exhibited identifiable differences from prior works and reflected the plaintiff’s original intellectual investment, thereby meeting the definition of “works” under China’s Copyright Law. The image was deemed a work of fine art, and the copyright belonged to the plaintiff, since the AI model itself could not be considered the author.

 

Regarding the intellectuality element, it requires the reflection of a natural person’s intellectual investment. In generating the image, the plaintiff made significant intellectual contributions, such as character design, selecting prompt words, arranging the order of prompt words, and choosing images that met expectations. As for the originality element, it mandates that the work be completed independently by the author and exhibit original expression. Determining whether the AIGC reflects the author’s personalized expression must be assessed on a case-by-case basis. In the present case, the plaintiff arranged and selected expressive details, such as image elements, layout, and composition, based on his aesthetic preferences and personal judgment, all a reflection of his own will. The large AI model functioned as the author’s creative tool, akin to a paintbrush or a camera.

 

Furthermore, applying the law in novel and challenging cases necessitates balancing various interests, including the interests of both parties, the groups they represent, the value choices of legislators, and social and public interests. The present case incentivizes people to use new tools to create, aligning with the legislative purpose of copyright law. As creators increasingly adopt AI tools, the income of software developers may also rise, creating a virtuous cycle that positively affects industrial development. Regarding public interest, it is difficult to distinguish between AIGC and human creations under existing technical conditions. If human creations are protected while AIGC is not, it could lead to negative incentives in society, discouraging individuals from using new tools or hiding the use of AI, potentially infringing upon the public's right to know.

 

Qian Wang: I must respectfully disagree with the presiding judge. Most of the so-called “problems” and “challenges” attributed to generative AI are not real. The only valid question is whether using copyrighted works to train AI falls under fair use. Generative AI does not pose any challenges to the idea/expression dichotomy and substantial similarity. First, let’s consider the idea/expression dichotomy. In the context of AI, we are discussing whether the prompts are considered “ideas” in relation to AI-generated images. If they are, then they are not protected. Our focus is not on determining if the prompts constitute a work or expression, but rather on whether the images generated by AI based on these prompts qualify as expressions of users under copyright law.

 

Suppose an art school teacher, who is also a poet, writes a poem spontaneously and asks a class of 30 students to each create a drawing based on the poem. While the poem written by the art teacher is undoubtedly a “work,” it serves as an idea in relation to the students’ drawings. The poem cannot dictate the composition of each student’s artwork, as they will interpret the poem according to their own thoughts and use their creativity to generate corresponding images. Regardless of how intricate and sophisticated the words used to describe the image may be, they cannot determine its composition. Similarly, even if the prompts are detailed, AI-generated images do not constitute a work. For instance, I once entered an English poem describing a sunset scene as a prompt into two large models, resulting in entirely different images. The description in the poem was detailed enough, yet one could write another 1,000 lines without obtaining identical results. If selecting and inputting prompts is considered a creative act, why does one creative act produce so many varied expressions? The only explanation is that the text, in relation to the picture it describes, serves merely as an idea, not an expression.

 

Second, AI does not challenge substantial similarity, which adopts an objective criterion for assessment. It focuses solely on the similarities between the plaintiff’s work and the alleged infringing content. Whether the content in question was generated by AI or created by humans is irrelevant. Furthermore, the U.S. Copyright Office is unlikely to register the image involved in the Beijing Internet Court case, according to the Office’s four rulings and guidelines. The Office has not accused any applicant of forgery, but instead seeks to clarify whether AIGC can be registered as a work. In the “Théâtre d'Opéra Spatial” case, the Office did not dispute the applicant’s claim that more than 600 prompts were used. However, it still determined that this was not a human-created work because it was autonomously generated by AI.

 

Guobin Cui: Firstly, regarding the idea/expression dichotomy, if the creator only provides prompts, although the prompts themselves may constitute a written work, the images generated by AI based on these prompts typically do not contain the creator’s original expression. It is only after the creator selects an image and then repeatedly modifies the expressive details or compositional elements through prompts or other methods that the expressive aspect of the image can be considered original. The difference between my opinion and Professor Qian Wang’s is that he believes even in the latter situation, originality is still not present.

 

Secondly, I concur with Professor Qian Wang that the U.S. Copyright Office would be unlikely to grant copyright protection to the image in the Beijing Internet Court case. The Office has denied protection to many AI-generated images, asserting that they lack originality. The image in the Chinese case involves even fewer prompts and modification details than its U.S. counterparts. For instance, in the “Théâtre d'Opéra Spatial” case, after the creator selected the image, he first established the larger framework, then modified the details, repeatedly using traditional tools such as Photoshop to refine the content. The entire process took more than 80 hours. The U.S. Copyright Office nevertheless concluded that the work lacks originality, which I consider an overly strict standard. Under that standard, it is inconceivable that the Office would recognize the originality of the image in the Chinese case.

 

Finally, concerning infringement and fair use, both Professors Grimmelmann and Schultz generally agree that using copyrighted works for AI training may constitute fair use. Professor Schultz primarily considers two aspects: competition and neutrality. First, imposing licensing requirements for AI training processes could hinder fair competition, not only between companies but also between countries. Second, if some copyright holders agree to license their works while others do not, it may result in a biased position in the AIGC. Moreover, the two U.S. professors seem to suggest that fair use may apply more readily to non-commercial purposes, while purely commercial purposes warrant further examination. However, I argue that even purely commercial purposes should be considered fair use. If commercial AI companies are required to pay substantial licensing fees for all training data and identify every individual’s contribution, it could lead to market failure and increased social costs, which are entirely unnecessary. Of course, if the AIGC infringes upon copyright, the legal liability of the content at the output stage should be investigated. This issue is separate from whether the use of data during the training stage constitutes fair use. They are two distinct matters.

 

 

 

Discussion

 

Jason M. Schultz: I agree that when prompts are more constructive and creative, the connection between them and the AIGC becomes stronger, thus facilitating the “transmission” of original expression. However, if only a few prompts are provided and the AI is left to complete the work, achieving this “transmission” effect becomes challenging. This intricacy is what makes the idea/expression dichotomy so interesting.

 

James Grimmelmann: Judge Ge Zhu raised an intriguing and thought-provoking point concerning the motivations and incentives for people to use AI in producing artistic works. The objective distinction between AIGC and human-created works is minimal. If copyright law were to declare that all AIGC is not protected, many individuals might be tempted to use AI while lying about and denying its involvement in their final products. Although I am unsure if this issue can ever be fully resolved, a legal system that establishes stark distinctions between rights in human-generated and AI-generated content could indeed create negative incentives.

 

Guobin Cui: I agree with Professor Schultz. When AI generates an initial image and a human creator uses detailed prompts to modify specific features of that image repeatedly, the creator may indeed make original contributions to the final image. Professor Qian Wang posits that text cannot define the expressive elements in images. However, this view is not always accurate, particularly when prompts are so detailed that they precisely define numerous pixel-level features of an image. In fact, any digital image can be described through words and defined by a computer program. Program code is akin to written expression. Thus, it would be incorrect to assert that text-based prompts can never contribute to the expression in an image. However, as previously mentioned, in most cases, a single round of prompts does not result in original contributions to AI-generated images. I also agree with Professor Grimmelmann that a failure to protect any AIGC could lead to negative societal attitudes toward AI usage. We should not just assume that original works cannot be produced using AI tools. In the context of AI’s deep integration with commonplace tools like Photoshop, users are almost certain to make personalized modifications to the AIGC. In such cases, it becomes meaningless to emphasize that AIGC cannot be protected by copyright law.

 

Qian Wang: Firstly, in the “Théâtre d'Opéra Spatial” case, I have no objections to the idea that AIGC may become a protected work after being modified by individuals using Photoshop. The U.S. Copyright Office also does not state that AIGC processed through Photoshop cannot be registered. In fact, the Office only required the applicant to waive their rights to the pure AIGC, but the applicant refused, resulting in the eventual failure to register.

 

Secondly, regarding the formation of a copyrighted work through multiple rounds of modifications, I have conducted experiments with Stable Diffusion and MidJourney. I initially prompted them to draw Chinese-style girls, and the images generated by the two AIs were entirely different. Next, I asked them to add glasses to the girl, which both generative models accomplished. However, Stable Diffusion also added a third hand to the girl, making the image look quite unsettling. The crux of the matter lies in the third step. When I specifically requested to reduce the height of the girl’s glasses frame to 2/3 of the original, both AI systems failed to achieve this. This is because AI currently cannot comprehend user instructions as humans do. It can only generate new images based on its own training and algorithmic rules. Consequently, no matter how many rounds of modifications there are, the user lacks control over the content generated in each round. In other words, each round of AIGC remains a black box, and humans cannot predict the final outcome, regardless of the number of rounds.

 

Thirdly, Professor Guobin Cui suggested that an image can be fully described by dividing it into numerous grids on the screen and then detailing its features at the pixel level. I recall when I first learned computer science, I input the Mona Lisa into the computer. My method involved entering coordinates rather than describing the painting in human natural language. These numerical values were communicated solely to the computer, unlike the AI text-to-image scenario we are discussing today. If we were to use natural language to describe an image, it would never perfectly match the image generated by AI, no matter how detailed. The only exception would be if AI evolves to the level depicted in the movie “Inception,” where it can infiltrate the human brain and accurately replicate a completed image. However, in this situation, it should be referred to as “replicative AI” rather than “generative AI.” Moreover, in the U.S., despite AIGC not being protected by copyright law, the number of AI users has not decreased. This suggests that the protection of AIGC through copyright does not necessarily influence the willingness of users to use generative AI.

   

Angela Zhang: Judge Ge Zhu argues that granting copyright to AIGC can encourage people to use AI. However, if AI becomes the primary tool for creation, human original works may diminish, potentially leading to data scarcity. This is because training large models still relies on works created by humans. Studies have shown that feeding a large model only on AI-generated data can cause its performance to decline over time. In the long term, promoting human-created content is vital for both artistic creation and AI development. What are your thoughts on this?

 

Jason M. Schultz: To cultivate an ideal creative economy, we should raise the bar for originality in AIGC. Currently, it is too easy to produce content with generative AI, so people do not need additional rewards and incentives to use it. In some cases, this may stifle creativity. New laws and regulations should establish thresholds based on human creations, granting certain rights to the AIGC that meets specific standards. This approach would reserve space for human creativity while still acknowledging AI-generated works.

 

Guobin Cui: We must maintain our faith in art. Content generated solely by AI cannot rival the work of genuine artists. Encouraging artists to use AI is just the beginning; continuous refinement and improvement are necessary during the creative process to give the work a soul. If an artist cannot achieve this, their work, if replaced by AI, cannot be considered true art. In response to Professor Qian Wang’s skepticism about AI’s ability to modify specific features of selected images, such issues as the sudden appearance of a third hand or the inability to adjust the height of glasses are technical problems that can be resolved by users. If a third hand appears, it might be due to unconstrained prompts. If the height of glasses cannot be modified, it may be because the user didn’t add brackets after the keyword “glasses” to provide machine-readable prompts or didn’t install the appropriate plug-in. Stable Diffusion is an open-source software with numerous developers creating plug-ins that enable humans to modify specific content in images using text, graphic commands, or keyboard operations. The possibilities are vast, and as AI technology advances, the distinction between text-based and Photoshop button-based modification methods will become irrelevant. In the future, seamless integration between the two will be achieved, marking an inevitable trend. Professor Qian Wang’s example of AI’s inability to modify specific features of selected images does not prove that text prompts cannot specifically or completely modify an AI-generated image. It simply indicates that existing AI technology has not yet reached its full potential, or it may stem from users’ inaccurate understanding of the technical possibilities of AI plug-ins.

 

Ge Zhu: I’ll respond to the question of incentives. First, AI enables individuals without traditional artistic skills to enter the art market and showcase their creativity. Second, many artists have incorporated large AI models into their toolset, potentially replacing repetitive tasks in their workflow. Regarding the issue of value, people may prefer handmade items despite their higher cost. As AI-generated works become more common, handcrafted works will become increasingly scarce and valuable. In addition, there are currently some barriers to using AI software, and users should be encouraged to invest more time and effort into learning these tools. Lastly, according to the mainstream view in China, originality is an all-or-nothing matter. Based on existing standards for “works,” a significant number of AI-generated images can meet the “originality” requirement, as the focus is on human input.

 

Qian Wang: I’ll respond with three concise points. First, my earlier example was not meant to disparage AI, but to illustrate how the technology works in practice. Second, when we discuss “tools,” we mean “creative tools,” not “tools” in the sense of “workers as tools for capitalists to make money” or “AI as a tool for humans to transform the world.” A creative tool must not participate in the content creation decision-making process; otherwise, it cannot be considered a creative tool. Third, whether AIGC can be protected as a work is unrelated to whether originality is an all-or-nothing question or a matter of degree. Any analysis of originality and intellectual investment is meaningless without considering the idea/expression dichotomy. For instance, E=mc², despite Einstein’s intellectual input and originality, is not a work because it falls within the realm of an “idea.”

 

 - The End - 

 

Angela Zhang: Thank you to all our esteemed speakers for their insights, and to the audience for their participation! This event marks the beginning of the Generative AI Governance Summit Dialogue Series. We hope you will continue to support us in our future endeavors!

 

Linghan Zhang: Today’s discussion was incredibly engaging. I’d like to extend my gratitude to all the speakers for their valuable insights and stimulating conversation. I hope future summit dialogues will be just as productive as today’s. Thank you all!
