policy Archives - Creative Commons
https://creativecommons.org/tag/policy/

Why CC Signals: An Update
Wed, 02 Jul 2025
https://creativecommons.org/2025/07/02/why-cc-signals-an-update/

CC Signals – An Update © 2025 by Creative Commons is licensed under CC BY 4.0

Thanks to everyone who attended our CC signals project kickoff last week. We’re receiving plenty of feedback, and we appreciate the insights. We are listening to all of it and hope that you continue to engage with us as we seek to make this framework fit for purpose.

Some of the input focuses on the specifics of the CC signals proposal, offering constructive questions and suggesting ideas for improving CC signals in practice. The most salient feedback, however, touches on something far deeper than the CC signals themselves – the fact that so much about AI seems to be happening to us all, rather than with or for us all, and that the expectations of creators and communities are at risk of being overshadowed by powerful interests.

This sentiment is not a surprise to us. We feel it, too. In fact, it is why we are doing this project. CC’s goal has always been to grow and sustain the thriving commons of knowledge and culture. We want people to be able to share with and learn from each other, without being or feeling exploited. CC signals is an extension of that mission in this evolving AI landscape.

We believe that the current practices of AI companies pose a threat to the future of the commons. Many creators and knowledge communities are feeling betrayed by how AI is being developed and deployed. The result is that people are understandably turning to enclosure. Eventually, we fear that people will no longer want to share publicly at all. 

CC signals are a first step toward reducing this damage by giving more agency to those who create and hold content. Unlike the CC licenses, they are explicitly designed to signal expectations even where copyright law is silent or unclear, where it does not apply, and where it varies by jurisdiction. We have listened to creators who want to share their work but also have concerns about exploitation. CC signals provide a way for creators to express those nuances. They build on developing standards for expressing AI usage preferences (e.g., via robots.txt). Creators who want to fully opt out of machine reuse do not need a CC signal; CC signals are for those who want to keep sharing, but with some terms attached.
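To make the existing preference-signal layer concrete, a site can already publish directives for AI crawlers in its robots.txt. The sketch below is an illustration of that binary allow/disallow layer, not of the CC signal vocabulary itself; GPTBot, Google-Extended, and CCBot are published crawler tokens, but honoring them is voluntary on the crawler operator's part:

```text
# Example robots.txt: disallow specific AI training crawlers
# while leaving ordinary web crawling untouched.

User-agent: GPTBot           # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended  # Google's AI-training opt-out token
Disallow: /

User-agent: CCBot            # Common Crawl's crawler
Disallow: /

# All other crawlers may access the whole site.
User-agent: *
Allow: /
```

A CC signal would layer a "keep sharing, with terms attached" middle ground on top of all-or-nothing rules like these.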

The challenge we’re all facing in this age of AI is how to protect the integrity and vitality of the commons. The listening we’ve been doing so far, across creator communities and open knowledge networks, has led us here, to CC signals. Our shared commitment is to protect the commons so that it remains a space for human creativity, collaboration, and innovation, and to make clear our expectation that those who draw from it give something in return. 

Our goal is to advocate for reciprocity while upholding our values that knowledge and creativity should not be treated as commodities. 

Our goal is to find a path between a free-for-all and an internet of paywalls.

Copyright will not get us there. Nor should it. And we don’t think the boundaries of copyright tell us everything we need to know about navigating this moment. Just this week, Open Future released a report that calls for going beyond copyright in this debate, on the path to a healthy knowledge commons.

This is the beginning of the conversation, not the end. We are listening. From what we have heard, CC signals, or something like it, is the best practical mechanism to avoid the dual traps of total exploitation or total enclosure, both of which damage the commons. We have shared our current progress because we want to learn how to design it to meet your needs. We invite you to continue sharing feedback so we can shape CC signals together in a way that works for diverse communities.

In the months ahead, we’ll be providing more detail about how CC signals are developing, including key themes we are hearing, along with the questions we are exploring and our next steps.

Introducing CC Signals: A New Social Contract for the Age of AI
Wed, 25 Jun 2025
https://creativecommons.org/2025/06/25/introducing-cc-signals-a-new-social-contract-for-the-age-of-ai/

CC Signals © 2025 by Creative Commons is licensed under CC BY 4.0

Creative Commons (CC) today announces the public kickoff of the CC signals project, a new preference signals framework designed to increase reciprocity and sustain a creative commons in the age of AI. The development of CC signals represents a major step forward in building a more equitable, sustainable AI ecosystem rooted in shared benefits. This step is the culmination of years of consultation and analysis. As we enter this new phase of work, we are actively seeking input from the public. 

As artificial intelligence (AI) transforms how knowledge is created, shared, and reused, we are at a fork in the road that will define the future of access to knowledge and shared creativity. One path leads to data extraction and the erosion of openness; the other leads to a walled-off internet guarded by paywalls. CC signals offer another way, grounded in the nuanced values of the commons expressed by the collective.

Based on the same principles that gave rise to the CC licenses and tens of billions of works openly licensed online, CC signals will allow dataset holders to signal their preferences for how their content can be reused by machines based on a set of limited but meaningful options shaped in the public interest. They are both a technical and legal tool and a social proposition: a call for a new pact between those who share data and those who use it to train AI models.

“CC signals are designed to sustain the commons in the age of AI,” said Anna Tumadóttir, CEO, Creative Commons. “Just as the CC licenses helped build the open web, we believe CC signals will help shape an open AI ecosystem grounded in reciprocity.”

CC signals recognize that change requires systems-level coordination. They are tools that will be built for machine and human readability, and are flexible across legal, technical, and normative contexts. However, at their core, CC signals are anchored in mobilizing the power of the collective. While CC signals may range in enforceability, legally binding in some cases and normative in others, their application will always carry ethical weight: we give, we take, we give again, and we are all in this together.

“If we are committed to a future where knowledge remains open, we need to collectively insist on a new kind of give-and-take,” said Sarah Hinchliff Pearson, General Counsel, Creative Commons. “A single preference, uniquely expressed, is inconsequential in the machine age. But together, we can demand a different way.”

Now Ready for Feedback 

More information about CC signals and early design decisions is available on the CC website. We are committed to developing CC signals transparently and alongside our partners and community. We are actively seeking public feedback and input over the next few months as we work toward an alpha launch in November 2025.

Get Involved

Join the discussion & share your feedback

To give feedback on the current CC signals proposal, hop over to the CC signals GitHub repository. You can engage in a few ways: 

  1. Read about the technical implementation of CC signals
  2. Join the discussion to share feedback about the CC signals project
  3. Submit an issue for any suggested direct edits

Attend a CC signals town hall

We invite our community to join us for a brief explanation of the CC signals framework, and then we will open the floor to you to share feedback and ask questions. 

Tuesday, July 15
6–7 PM UTC
Register here.

Tuesday, July 29
1–2 PM UTC
Register here.

Friday, Aug 15
3–4 PM UTC
Register here. 

Support the movement

CC is a nonprofit. Help us build CC signals with a donation.

The age of AI demands new tools, new norms, and new forms of cooperation. With CC signals, we’re building a future where shared knowledge continues to thrive. Join us.

Why Digital Public Goods, including AI, Should Depend on Open Data
Mon, 27 Jan 2025
https://creativecommons.org/2025/01/27/why-digital-public-goods-including-ai-should-depend-on-open-data/

Acknowledging that some data should not be shared (for moral, ethical and/or privacy reasons) and some cannot be shared (for legal or other reasons), Creative Commons (CC) thinks there is value in incentivizing the creation, sharing, and use of open data to advance knowledge production. As open communities continue to imagine, design, and build digital public goods and public infrastructure services for education, science, and culture, these goods and services – whenever possible and appropriate – should produce, share, and/or build upon open data.

Open Data by Auregann is licensed under CC BY-SA 3.0.

Open Data and Digital Public Goods (DPGs)

CC is a member of the Digital Public Goods Alliance (DPGA) and CC’s legal tools have been recognized as digital public goods (DPGs). DPGs are “open-source software, open standards, open data, open AI systems, and open content collections that adhere to privacy and other applicable best practices, do no harm, and are of high relevance for attainment of the United Nations 2030 Sustainable Development Goals (SDGs).” If we want to solve the world’s greatest challenges, governments and other funders will need to invest in, develop, openly license, share, and use DPGs.

Open data is important to DPGs because data is a key driver of economic vitality with demonstrated potential to serve the public good. In the public sector, data informs policy making and public service delivery by helping to channel scarce resources to those most in need, providing the means to hold governments accountable, and fostering social innovation. In short, data has the potential to improve people’s lives. When data is closed or otherwise unavailable, the public does not accrue these benefits.

CC was recently part of a DPGA sub-committee working to preserve the integrity of open data as part of the DPG Standard. This important update to the DPG Standard was introduced to ensure only open datasets and content collections with open licenses are eligible for recognition as DPGs. This new requirement means open datasets and content collections must meet the following criteria to be recognized as digital public goods.

  1. Comprehensive Open Licensing: The entire dataset or content collection must be under an acceptable open license. Mixed-licensed collections will no longer be accepted.
  2. Accessible and Discoverable: All dataset and content collection DPGs must be openly licensed and easily accessible from a distinct, single location, such as a unique URL.
  3. Permitted Access Restrictions: Certain access restrictions – such as logins, registrations, API keys, and throttling – are permitted as long as they do not discriminate against users or restrict usage based on geography or any other factors.

The DPGA writes: “This new requirement is designed to increase trust and confidence in all DPGs by ensuring that users can fully engage with solutions without concerns over intellectual property infringement. Simplifying access and usage aligns with the DPGA’s goal of making DPGs truly open and accessible for widespread adoption… it helps foster an environment and ecosystem where innovation can thrive without legal uncertainties.”

AI and Open Data

As CC examines AI and its potential to be a public good that helps solve global challenges, we believe open data will play a similarly important role.

CC recognizes AI is a rapidly developing space, and we appreciate everyone’s diligent work to create definitions, recommendations, guidance, and warnings about AI. After two years of community consultation, the Open Source Initiative released version 1.0 of the Open Source AI Definition (OSAID) on October 28, 2024. This definition is an important step in starting the conversation about what open means for AI systems. However, the OSAID’s data sharing requirements remain contentious, particularly around whether and how training data for AI models should be shared.

In CC’s view, the difficulty of building and releasing open datasets does not mean we should stop encouraging it. In cases where training data should not or cannot be shared, we encourage detailed summaries that explain the contents of the dataset and give instructions for reproducibility; nonetheless, that data should be defined as closed. When data can be made open and shared, it should be.

We agree with Liv Marte Nordhaug, CEO, Digital Public Goods Alliance who said in a recent post: “With regards to AI systems, there is a need to ensure that we don’t inadvertently undermine the open data movement and open data as a category of DPGs by advancing an approach to AI systems that is more permissive than for other categories of DPGs. Maintaining a high bar on training data could potentially result in fewer AI systems meeting the DPG Standard criteria. However, SDG relevance, platform independence, and do-no-harm by design are features that set DPGs apart from other open source solutions—and for those reasons, the inclusion of [AI] training data is needed.”

Next Steps

CC will continue to work with the DPGA, and other partners, as it develops a standard as to what qualifies an AI model to be a digital public good. In that arena we will advocate for open datasets, and consideration of a tiered approach, so that components of an AI model can be considered digital public goods, without the entire model needing to have every component openly shared. Updated recommendations and guidelines that recognize the value of fully open AI systems that use and share open datasets will be an important part of ensuring AI serves the public good.


¹Digital Public Goods Standard
²Data for Better Lives. World Bank (2021). CC BY 3.0 IGO

An Invitation for Creators, Activists, and Stewards of the Open Movement
Sun, 11 Feb 2024
https://creativecommons.org/2024/02/11/an-invitation-for-creators-activists-and-stewards-of-the-open-movement/

Dear Open Movement Creators, Activists, and Stewards, 

A key question facing Creative Commons as an organization, and the open movement in general, is how we will respond to the challenge of shaping artificial intelligence (AI) towards the public interest, growing and sustaining a thriving commons of shared knowledge and culture.

So much of generative AI is built on the digital infrastructure of the commons and uses the vast quantity of images, text, video, and rich data resources of the internet. Organizations train their models with trillions of tokens from publicly available datasets like Common Crawl, GitHub open source projects, Wikipedia, and arXiv.

Access to the commons has enabled incredible innovations while creating the conditions for the concentration of power in entities that are able to amass the immense energy and data needed to train AI models. Community consultations at conferences like MozFest, RightsCon, Wikimania, and the CC Global Summit have also revealed concerns about transparency, bias, fairness, and attribution in AI.

Alignment Assembly

To start addressing some of these challenges, between 13 February and 15 March, Open Future will host an asynchronous, virtual alignment assembly for the open movement to explore principles and considerations for regulating generative AI. We hope to reach participants spread across different fields of open and coming from different regions of the world. We are organizing the assembly in partnership with Open Future and Fundación Karisma.

We want to bring to the conversation the perspectives of:

  • Activists and experts, including digital rights advocates and legal experts
  • Stewards: people from organizations that steward collections that are part of the digital commons such as Wikimedia, open access repositories, and cultural heritage collections
  • Creators: people who create works that form part of the digital commons, broadly: not only visual artists and musicians but also researchers who do open science or open source programmers

We will use the process of an alignment assembly, an experiment in collective deliberation and decision-making. This model is pioneered by the Collective Intelligence Project (CIP), led by Divya Siddarth and Saffron Huang. The model has been used by OpenAI, Anthropic, and the government of Taiwan.

You can sign up to take part in the process by registering your interest here (we will only use the contact information to invite you to the assembly and provide updates, and will delete it once the assembly process is complete).

Background

Creative Commons has long been considering the intersection of copyright and AI. CC submitted comments to the World Intellectual Property Organization’s consultations on copyright and AI in 2020, and in 2021 explored the question “Should CC-licensed work be used to train AI?” More recently, CC carried out consultations at MozFest, RightsCon, Wikimania, and the CC Global Summit, while publishing ongoing analysis of the AI landscape.

Ahead of the Creative Commons Global Summit last year, Creative Commons and Open Future hosted a workshop on generative AI and its impact on the commons. The group agreed and released a set of principles on “Making AI work for Creators and the Commons.” Now, we would like to test and expand this work. 

Outcome

The Alignment Assembly on AI and the Commons builds on and continues all of this work.

We treat the principles as a starting point. We are using the alignment assembly methodology and the Pol.is tool to understand where there is consensus and which principles generate controversy, and in particular how much alignment there is between the perspectives of activists, creators, and stewards of the commons.

At the end of the process, we will produce a report with the outcomes of the assembly and a proposal for a refined set of principles. As the policy debate about the commons and AI develops, we hope the assembly will provide insights into better regulation of generative AI.

Sign up here to share your thoughts on regulating generative AI.

What does the CC Community Think about Regulating Generative AI?
Thu, 08 Feb 2024
https://creativecommons.org/2024/02/08/what-does-the-cc-community-think-about-regulating-generative-ai/

In the past year, Creative Commons, alongside other members of the Movement for a Better Internet, hosted workshops and sessions at community conferences like MozFest, RightsCon, and Wikimania, to hear from attendees regarding their views on artificial intelligence (AI). In these sessions, community members raised concerns about how AI is utilizing CC-licensed content, and discussions touched on issues like transparency, bias, fairness, and proper attribution. Some creators worry that their work is being used to train AI systems without proper credit or consent, and some have asked for clearer guidelines around public benefit and reciprocity. 

In 2023, the theme of the CC Global Summit was AI and the Commons, focused on supporting better sharing in a world with artificial intelligence — sharing that is contextual, inclusive, just, equitable, reciprocal, and sustainable. A team including CC General Counsel Kat Walsh, Director of Communications & Community Nate Angell, Director of Technology Timid Robot, and Tech Ethics Consultant Shannon Hong collaborated to use alignment assembly practices to engage the Summit community in thinking through a complex question: how should Creative Commons respond to the use of CC-licensed work in AI training? The team identified concerns CC should consider in relation to works used in AI training and mapped out possible practical interventions CC might pursue to ensure a thriving commons in a world with AI.

At the Summit, we engaged participants in an Alignment Assembly using Pol.is, an open-source, real-time survey platform, for input and voting. Twenty-five people voted using Pol.is, and in total 604 votes were cast on over 33 statements, with an average of 24 votes per voter. This included both pre-written seed statements and ideas suggested by participants.

The one thing everyone agreed on wholeheartedly: CC should NOT stay out of the AI debate. All attendees disagreed with the statement: “CC should not engage with AI or AI policy.” 

Pol.is aggregates the votes and divides participants into opinion groups. Opinion groups are made of participants who voted similarly to each other, and differently from other groups. There were three opinion groups that resulted from this conversation.
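As a rough sketch of the grouping idea described above (not Pol.is's actual implementation, which involves dimensionality reduction before clustering), each voter can be represented as a vector of votes per statement (+1 agree, -1 disagree, 0 pass) and clustered so that similarly-voting participants land in the same opinion group. The sample votes below are hypothetical:

```python
# Minimal k-means sketch for grouping voters who voted similarly.
import random

def kmeans(vectors, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    assignment = [0] * len(vectors)
    for _ in range(iters):
        # Assign each voter to the nearest centroid (squared distance).
        for i, v in enumerate(vectors):
            assignment[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])),
            )
        # Recompute each centroid as the mean of its members.
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assignment[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assignment

# Hypothetical votes on four statements: two clear opinion groups.
votes = [
    [1, 1, -1, 0],
    [1, 1, -1, -1],
    [-1, -1, 1, 1],
    [-1, 0, 1, 1],
]
groups = kmeans(votes, k=2)
# Voters 0 and 1 end up in one group, voters 2 and 3 in the other.
```

In practice the vote matrix is much larger and sparser, which is why Pol.is reduces its dimensionality before clustering, but the core idea is the same: opinion groups are clusters of similar vote vectors.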

Group A: Moat Protectors

Group A comprises 16% of participants and is characterized by a desire to focus on Creative Commons’ current expertise, specifically some relevant advocacy and the development of preference signaling. They uniquely support noncommercial public interest AI training, unlike B and C. This group is uniquely against additional changes like model licenses and strongly against political lobbying in the US.

Group B: AI Oversight Maximalists

Group B, the largest group with 36% of participants, strongly supports Creative Commons taking all actions possible to create oversight in AI, including new political lobbying actions or collaborations, AI teaching resources, model licenses, attribution laws, and preference signaling. This group uniquely supports political lobbying and new regulatory bodies.

Group C: Equitable Benefit Seekers

Group C, containing 32% of participants, is focused on protecting traditional knowledge, preserving the ability to choose where works can be used, and prioritizing equitable benefit from AI. This group strongly supports requiring authorization for using traditional knowledge in AI training and sharing the benefits of profits derived from the commons. Like group A, this group is against political lobbying in the US.

There are two key limitations of this assembly: participant sample size and participant representativeness. There are over 22,000 members in the Creative Commons Slack community, which is only a subset of the many more members of the CC community more broadly. Thirty people were present as active voting members of the assembly. While many participants were open movement leaders in their countries and represented the perspectives of more individuals, this sample is too small to give a complete picture of the CC community’s desires. We did not perform a demographic survey of the room, but data from the overall conference suggests that American and European perspectives may be overrepresented in our assembly.

Want to learn more about the specific takeaways? Read the full report.

We invite CC members to participate in the next alignment assembly, hosted by Open Future.  Sign up and learn more here. 

CC’s Key Insights from WIPO’s Meeting on Copyright
Thu, 09 Nov 2023
https://creativecommons.org/2023/11/09/cc-key-insights-wipo-meeting-on-copyright/

From 6 to 8 November 2023, Creative Commons (CC) participated remotely in the 44th session of the World Intellectual Property Organization (WIPO) Standing Committee on Copyright and Related Rights (SCCR). In this blog post, we look back on the session’s highlights on broadcasting, exceptions and limitations, and generative AI, from CC’s perspective.

As in previous sessions, our main objective was to drive copyright reform towards better sharing of copyrighted content in the public interest and in tune with the sharing possibilities of the digital environment. In this short session, we addressed the proposed broadcasting treaty and exceptions and limitations in our opening statement, as reported in the “Statements” information document (SCCR/44/INF/STATEMENTS).

We also offered views on exceptions and limitations for cultural heritage institutions, i.e. libraries, archives, and museums; you can watch our intervention on the WIPO webcast. These views are in line with our Open Culture Program’s recently launched initiative Towards a Recommendation on Open Culture (TAROC), which aims to develop policy recognizing the role of open culture in reaching wider policy goals, notably in relation to copyright and the access and use of cultural heritage — see our TAROC Two-Pager in English, Shqip, français, Español, 日本語, Türkçe, italiano, عربي.

Overall, we are rather satisfied with the session’s outcomes. On broadcasting, we remain concerned that discussions on the draft broadcasting treaty are being maintained on the agenda despite evidence of a clear stalemate in the discussions; we are nonetheless heartened by the acknowledged need to work towards a balanced approach on exceptions and limitations in the draft treaty.

On exceptions and limitations, we are pleased that the SCCR Secretariat has undertaken to prepare a detailed implementation plan for the Work Program on Exceptions and Limitations; in CC’s view, this plan should provide for open and transparent engagement opportunities and wide participation from civil society, of which CC is a leading voice. It should notably allow for real progress on substantive issues to support meaningful access and use of cultural heritage for preservation and other legitimate purposes.

We also welcome the organization of a virtual panel discussion on cross-border uses of copyright works in the educational and research sectors open to all member states as well as observers. As an accredited observer, CC places high value on broad and inclusive participation to ensure balanced and diverse perspectives can be brought to the table for a constructive debate. We recall that licensing falls short of addressing the problems that libraries, museums, archives, educational and research institutions, as well as persons with disabilities, face on a daily basis. Licensing is not a substitute for robust, flexible, mandatory exceptions and limitations to empower those who teach, learn and research, those who share in and build upon cultural heritage, and people with disabilities.

We note Group B’s Proposal Information Session on Generative AI and Copyright (SCCR/44/8) and look forward to the Secretariat organizing an open, inclusive, and balanced session at the next SCCR under the item of Copyright in the Digital Environment. As we have stated at the WIPO Conversation on Generative AI and Intellectual Property last September, generative AI raises important issues and is having an enormous impact on creativity, the commons, and better sharing, i.e., sharing that is inclusive, equitable, reciprocal, and sustainable. Our consultations on the matter have revealed a wide variety of views among creators, AI developers, and other stakeholders in the commons. They have also shed light on the fact that copyright is but one lens through which to consider generative AI; what is more, it is a rather blunt tool that often leads to black-and-white solutions that fall short of harnessing all the diverse possibilities that generative AI offers for human creativity. Our interventions on copyright and generative AI in the United States and the European Union contexts attest to those nuanced views. We thus call on the Secretariat to ensure the session will offer a balanced and representative set of perspectives.

We look forward to participating in the Committee’s next session, to take place from April 15 to 19, 2024, and to bringing our expertise on copyright, better sharing of cultural heritage, and generative AI in order to help create a fairer and more balanced international copyright system in the public interest.

→ To stay informed about our policy and open culture work:

Sign up for our Open Culture Matters newsletter >

The post CC’s Key Insights from WIPO’s Meeting on Copyright appeared first on Creative Commons.

]]>
Maarten Zeinstra — Open Culture VOICES, Season 2 Episode 31 https://creativecommons.org/2023/10/31/maarten-zeinstra-open-culture-voices-season-2-episode-31/?utm_source=rss&utm_medium=rss&utm_campaign=maarten-zeinstra-open-culture-voices-season-2-episode-31 Tue, 31 Oct 2023 05:00:31 +0000 https://creativecommons.org/?p=67470   Maarten believes that “Open GLAM is a necessity of a disbalanced copyright framework.” Maarten talks about how open access policies help institutions achieve their public missions. Open access policies in institutions provide good evidence that society and communities need access to cultural heritage to flourish. Open Culture VOICES is a series of short videos…

The post Maarten Zeinstra — Open Culture VOICES, Season 2 Episode 31 appeared first on Creative Commons.

]]>

 

Maarten believes that “Open GLAM is a necessity of a disbalanced copyright framework.” Maarten talks about how open access policies help institutions achieve their public missions. Open access policies in institutions provide good evidence that society and communities need access to cultural heritage to flourish.

Open Culture VOICES is a series of short videos that highlight the benefits and barriers of open culture as well as inspiration and advice on the subject of opening up cultural heritage. Maarten is an independent consultant and intellectual property lawyer who works with GLAM institutions on open access policies and implementing open information management systems.

Maarten responds to the following questions:

  1. What are the main benefits of open GLAM?
  2. What are the barriers?
  3. Could you share something someone else told you that opened up your eyes and mind about open GLAM?
  4. Do you have a personal message to those hesitating to open up collections?

Closed captions are available for this video; you can turn them on by clicking the CC icon at the bottom of the video. A red line will appear under the icon when closed captions have been enabled. Closed captions may be affected by Internet connectivity — if you experience a lag, we recommend watching the videos directly on YouTube.

Want to hear more insights from Open Culture experts from around the world? Watch more episodes of Open Culture VOICES here >>

The post Maarten Zeinstra — Open Culture VOICES, Season 2 Episode 31 appeared first on Creative Commons.

]]>
An Open Letter from Artists Using Generative AI https://creativecommons.org/2023/09/07/an-open-letter-from-artists-using-generative-ai/?utm_source=rss&utm_medium=rss&utm_campaign=an-open-letter-from-artists-using-generative-ai Thu, 07 Sep 2023 17:00:57 +0000 https://creativecommons.org/?p=67848 As part of Creative Commons’ ongoing community consultation on generative AI, CC has engaged with a wide variety of stakeholders, including artists and content creators, about how to help make generative AI work better for everyone. Certainly, many artists have significant concerns about AI, and we continue to explore the many ways they might be…

The post An Open Letter from Artists Using Generative AI appeared first on Creative Commons.

]]>
A bluish surrealist painting generated by the DALL-E 2 AI platform showing a small grayish human figure holding a gift out to a larger robot that has its arms extended and a head like a cello.

“Better Sharing With AI” by Creative Commons was generated by the DALL-E 2 AI platform with the text prompt “A surrealist painting in the style of Salvador Dali of a robot giving a gift to a person playing a cello.” CC dedicates any rights it holds to the image to the public domain via CC0.

As part of Creative Commons’ ongoing community consultation on generative AI, CC has engaged with a wide variety of stakeholders, including artists and content creators, about how to help make generative AI work better for everyone.

Certainly, many artists have significant concerns about AI, and we continue to explore the many ways they might be addressed. Just last week, we highlighted the useful roles that could be played by new tools to signal whether an artist approves of use of their works for AI training.

At the same time, artists are not homogenous, and many others are benefiting from this new technology. Unfortunately, the debate about generative AI has too often become polarized and destructive, with artists who use AI facing harassment and even death threats. As part of the consultation, we also explored how to surface these artists’ experiences and views.

Today, we’re publishing an open letter from over 70 artists who use generative AI. It grew from conversations with an initial cohort of the full signatory list, and we hope it can help foster inclusive, informed discussions.

Signed by artists like Nettrice Gaskins, dadabots, Rob Sheridan, Charlie Engman, Tim Boucher, illustrata, makeitrad, Jrdsctt, Thomas K. Yonge, BLAC.ai, Deltasauce, and Cristóbal Valenzuela, the letter reads in part:

“We write this letter today as professional artists using generative AI tools to help us put soul in our work. Our creative processes with AI tools stretch back for years, or in the case of simpler AI tools such as in music production software, for decades. Many of us are artists who have dedicated our lives to studying in traditional mediums while dreaming of generative AI’s capabilities. For others, generative AI is making art more accessible or allowing them to pioneer entirely new artistic mediums. Just like previous innovations, these tools lower barriers in creating art—a career that has been traditionally limited to those with considerable financial means, abled bodies, and the right social connections.”

Read the full letter and list of signatories. If you would like to have your name added to this list and are interested in follow-up actions with this group, please sign our form. You can share the letter with this shorter link: creativecommons.org/artistsailetter

While the policy issues here are globally relevant, the letter is addressed to Senator Chuck Schumer and the US Congress in light of ongoing hearings and “Insight Fora” on AI hosted in the USA. Next week, Schumer is hosting one of these Fora, but the attendees are primarily from tech companies; the Motion Picture Association of America and the Writers Guild of America are invited, but no artists who use generative AI are specifically included.

We also invited artists to share additional perspectives with us, some of which we’re publishing here:

Nettrice Gaskins said: “Generative AI imaging is a continuation of creative practices I learned as a college student, in my computer graphics courses. It’s the way of the future, made accessible to us in the present, so don’t throw the baby out with the bathwater.”

Elizabeth Ann West said: “Generative AI has allowed me to make a living wage again with my writing, allowing me to get words on the page even when mental and chronic health conditions made doing so nearly impossible. I published 3 books the first year I had access to Davinci 3. Generative AI allows me to work faster and better for my readers.”

JosephC said: “There must be room for nuance in the ongoing discussion about machine-generated content, and I feel that the context vacuum of online discourse has made it impossible to talk and be heard when it comes to the important details of consent, the implications of regulation, and the prospects of making lives better. We need to ensure that consenting creatives can see their work become part of something greater, we need to ensure pioneering artists are free to express themselves in the medium that gives them voice, and we need to be mindful of the wishes of artists who desire to have their influence only touch the eyes and ears and minds of select other humans in the way they want. Opportunities abound; let us work together to realize them.”

Tim Simpson said: “Generative AI is the photography of this century. It’s an incredible new medium that has immense potential to be leveraged by artists, particularly indie artists, to pursue artistic visions that would have been completely infeasible for solo artists and small teams just a year ago. Open source AI tools are immensely important to the development of this medium and making sure that it remains available to the average person instead of being walled off into monopolized corporate silos. Many of the regulatory schemes that are being proposed today jeopardize that potential, and I strongly urge congress to support measures that keep these tools open and freely available to all.”

Rob Sheridan said: “As a 25 year professional artist and art director, I’ve adapted to many shifts in the creative industry, and see no reason to panic with regards to AI art technology itself….I fully understand and appreciate the concerns that artists have about AI art tools. With ANY new technology that automates human labor, we unfortunately live under a predatory capitalism where corporations are incentivized to ruthlessly cut human costs any way they can, and they’ve made no effort to hide their intentions with AI (how many of those intentions are realistic and how many are products of an AI hype bubble is a different conversation). But this is a systemic problem that goes well beyond artists; a problem that didn’t begin with AI, and won’t end with AI. Every type of workforce in America is facing this problem, and the solutions lie in labor organizing and in uniting across industries for major systemic changes like universal healthcare and universal guaranteed income. Banning or over-regulating AI art tools might plug one small hole in the leaky dam of corporate exploitation, but it closes a huge potential doorway for small creators and businesses.”

The post An Open Letter from Artists Using Generative AI appeared first on Creative Commons.

]]>
Exploring Preference Signals for AI Training https://creativecommons.org/2023/08/31/exploring-preference-signals-for-ai-training/?utm_source=rss&utm_medium=rss&utm_campaign=exploring-preference-signals-for-ai-training Thu, 31 Aug 2023 22:59:39 +0000 https://creativecommons.org/?p=67798 One of the motivations for founding Creative Commons (CC) was offering more choices for people who wish to share their works openly. Through engagement with a wide variety of stakeholders, we heard frustrations with the “all or nothing” choices they seemed to face with copyright. Instead they wanted to let the public share and reuse…

The post Exploring Preference Signals for AI Training appeared first on Creative Commons.

]]>
Close up photo of three round metal signs lying haphazardly on a stony path, each with a big white arrow pointing in a different direction, embossed on a greenish-blue background.

“Choices” by Derek Bruff, here cropped, licensed via CC BY-NC 2.0.

One of the motivations for founding Creative Commons (CC) was offering more choices for people who wish to share their works openly. Through engagement with a wide variety of stakeholders, we heard frustrations with the “all or nothing” choices they seemed to face with copyright. Instead they wanted to let the public share and reuse their works in some ways but not others. We also were motivated to create the CC licenses to support people — artists, technology developers, archivists, researchers, and more — who wished to re-use creative material with clear, easy-to-understand permissions.

What’s more, our engagement revealed that people were motivated to share not merely to serve their own individual interests, but rather because of a sense of societal interest. Many wanted to support and expand the body of knowledge and creativity that people could access and build upon — that is, the commons. Creativity depends on a thriving commons, and expanding choice was a means to that end.

Similar themes came through in our community consultations on generative artificial intelligence (AI*). Obviously, the details of AI and technology in society in 2023 are different from 2002. But the challenges of an all-or-nothing system where works are either open to all uses, including AI training, or entirely closed, are a through-line. So, too, is the desire to expand choice in a way that supports creativity, collaboration, and the commons.

One option that was continually raised was preference signaling: a way of making requests about some uses, not enforceable through the licenses, but an indication of the creators’ wishes. We agree that this is an important area of exploration. Preference signals raise a number of tricky questions: how to ensure they are part of a comprehensive approach to supporting a thriving commons, rather than merely a way to limit particular ways people build on existing works, and whether that approach is compatible with the intent of open licensing. At the same time, we do see potential for them to help facilitate better sharing.

What We Learned: Broad Stakeholder Interest in Preference Signals

In our recent posts about our community consultations on generative AI, we have highlighted the wide range of views in our community about generative AI.

Some people are using generative AI to create new works. Others believe it will interfere with their ability to create, share, and earn compensation, and they object to current ways AI is trained on their works without express permission.

While many artists and content creators want clearer ways to signal their preferences for use of their works to train generative AI, their preferences vary. Between the poles of “all” and “nothing,” there were gradations based on how generative AI was used specifically. For instance, they varied based on whether generative AI is used

  • to edit a new creative work (similar to the way one might use Photoshop or another editing program to alter an image),
  • to create content in the same category as the works it was trained on (e.g., using pictures to generate new pictures),
  • to mimic a particular person or replace their work generally, or
  • to mimic a particular person and replace their work to commercially pass themselves off as the artist (as opposed to doing a non-commercial homage, or a parody).

Views also varied based on who created and used the AI — whether researchers, nonprofits, or companies, for instance.

Many technology developers and users of AI systems also shared interest in defining better ways to respect creators’ wishes. Put simply, if they could get a clear signal of the creators’ intent with respect to AI training, then they would readily follow it. While they expressed concerns about over-broad requirements, the issue was not all-or-nothing.

Preference Signals: An Ambiguous Relationship to a Thriving Commons

While there was broad interest in better preference signals, there was no clear consensus on how to put them into practice. In fact, there is some tension and some ambiguity when it comes to how these signals could impact the commons.

For example, people brought up how generative AI may impact publishing on the Web. For some, concerns about AI training meant that they would no longer be sharing their works publicly on the Web. Similarly, some were specifically concerned about how this would impact openly licensed content and public interest initiatives; if people can use ChatGPT to get answers gleaned from Wikipedia without ever visiting Wikipedia, will Wikipedia’s commons of information continue to be sustainable?

From this vantage point, the introduction of preference signals could be seen as a way to sustain and support sharing of material that might otherwise not be shared, allowing new ways to reconcile these tensions.

On the other hand, if preference signals are broadly deployed just to limit this use, it could be a net loss for the commons. These signals may be used in a way that is overly limiting to expression — such as limiting the ability to create art that is inspired by a particular artist or genre, or the ability to get answers from AI systems that draw upon significant areas of human knowledge.

Additionally, CC licenses have resisted restrictions on use, in the same manner as open source software licenses. Such restrictions are often so broad that they cut off many valuable, pro-commons uses in addition to the undesirable uses; generally the possibility of the less desirable uses is a tradeoff for the opportunities opened up by the good ones. If CC is to endorse restrictions in this way, we must be clear that our preference is a “commons first” approach.

This tension is not easily reconcilable. Instead, it suggests that preference signals are by themselves not sufficient to help sustain the commons, and should be explored as only a piece of a broader set of paths forward.

Existing Preference Signal Efforts

So far, this post has spoken about preference signals in the abstract, but it’s important to note that there are already many initiatives underway on this topic.

For instance, Spawning.ai has worked on tools to help artists find if their works are contained in the popular LAION-5B dataset, and decide whether or not they want to exclude them. They’ve also created an API that enables AI developers to interoperate with their lists; StabilityAI has already started accepting and incorporating these signals into the data they used to train their tools, respecting artists’ explicit opt-ins and opt-outs. Eligible datasets hosted on the popular site Hugging Face also now show a data report powered by Spawning’s API, informing model trainers what data has been opted out and how to remove it. For web publishers, they’ve also been working on a generator for “ai.txt” files that signals restrictions or permissions for the use of a site’s content for commercial AI training, similar to robots.txt.
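
To make the “ai.txt” idea concrete: the exact syntax is defined by Spawning’s generator, but as an illustration only, such a robots.txt-style preference file, placed at a site’s root, might express per-media-type choices along these lines (directives here are hypothetical, not the authoritative format):

```
# Illustrative ai.txt-style sketch (see Spawning's generator for actual syntax).
User-Agent: *
# Request that image and audio files not be used for commercial AI training:
Disallow: *.jpg
Disallow: *.png
Disallow: *.mp3
# Text content may be used:
Allow: *.html
```

Like robots.txt, a file of this kind is a request to crawlers rather than a technical or legal enforcement mechanism; its effect depends on AI developers choosing to honor it.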

There are many other efforts exploring similar ideas. For instance, a group of publishers within the World Wide Web Consortium (W3C) is working on a standard by which websites can express their preferences with respect to text and data mining. The EU’s copyright law expressly allows people to opt out of text and data mining through machine-readable formats, and the idea is that the standard would fulfill that purpose. Adobe has created a “Do Not Train” metadata tag for works generated with some of its tools, Google has announced work to build an approach similar to robots.txt, and OpenAI has provided a means for sites to exclude themselves from crawling for future versions of GPT.
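
OpenAI’s exclusion mechanism reuses the long-standing robots.txt convention: its crawler identifies itself with the user-agent token `GPTBot`, so a site operator who does not want their pages collected for future model training can add a stanza like this to the robots.txt file at the site root:

```
# Ask OpenAI's crawler not to fetch any page on this site:
User-agent: GPTBot
Disallow: /

# Or, to exclude only part of a site, scope the rule to a path:
# User-agent: GPTBot
# Disallow: /private-archive/
```

As with all robots.txt rules, this is a signal that well-behaved crawlers follow voluntarily; it does not retroactively remove content from datasets already collected.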

Challenges and Questions in Implementing Preference Signals

These efforts are still in relatively early stages, and they raise a number of challenges and questions. To name just a few:

  • Ease-of-Use and Adoption: For preference signals to be effective, they must be easy for content creators and follow-on users to make use of. How can solutions be easy to use, scalable, and able to accommodate different types of works, uses, and users?
  • Authenticating Choices: How best to validate and trust that a signal has been put in place by the appropriate party? Relatedly, who should be able to set the preferences — the rightsholder for the work, the artist who originally created it, or both?
  • Granular Choices for Artists: So far, most efforts have been focused on enabling people to opt out of use for AI training. But as we note above, people have a wide variety of preferences, and preference signals should also be a way for people to signal that they are OK with their works being used, too. How might signals strike the right balance, enabling people to express granular preferences without becoming too cumbersome?
  • Tailoring and Flexibility Based on Types of Works and Users: We’ve focused in this post on artists, but there are of course a wide variety of types of creators and works. How can preference signals accommodate scientific research, for instance? In the context of indexing websites, commercial search engines generally follow the robots.txt protocol, although institutions like archives and cultural heritage organizations may still crawl to fulfill their public interest missions. How might we facilitate similar sorts of norms around AI?

As efforts to build preference signals continue, we will continue to explore these and other questions in hopes of informing useful paths forward. Moreover, we will also continue to explore other mechanisms necessary to help support sharing and the commons. CC is committed to more deeply engaging in this subject, including at our Summit in October, whose theme is “AI and the Commons.”

If you are in New York City on 13 September 2023, join our symposium on Generative AI & the Creativity Cycle, which focuses on the intersection of generative artificial intelligence, cultural heritage, and contemporary creativity. If you miss the live gathering, look for the recorded sessions.

The post Exploring Preference Signals for AI Training appeared first on Creative Commons.

]]>
Understanding CC Licenses and Generative AI https://creativecommons.org/2023/08/18/understanding-cc-licenses-and-generative-ai/?utm_source=rss&utm_medium=rss&utm_campaign=understanding-cc-licenses-and-generative-ai Fri, 18 Aug 2023 19:07:55 +0000 https://creativecommons.org/?p=67737 Many wonder what role CC licenses, and CC as an organization, can and should play in the future of generative AI. The legal and ethical uncertainty over using copyrighted inputs for training, the uncertainty over the legal status and best practices around works produced by generative AI, and the implications for this technology on the…

The post Understanding CC Licenses and Generative AI appeared first on Creative Commons.

]]>
A black and white illustration of a group of human figures in silhouette using unrecognizable tools to work on a giant Creative Commons icon.

“CC Icon Statue” by Creative Commons, generated in part by the DALL-E 2 AI platform. CC dedicates any rights it holds to this image to the public domain via CC0.

Many wonder what role CC licenses, and CC as an organization, can and should play in the future of generative AI. The legal and ethical uncertainty over using copyrighted inputs for training, the uncertainty over the legal status and best practices around works produced by generative AI, and the implications for this technology on the growth and sustainability of the open commons have led CC to examine these issues more closely. We want to address some common questions, while acknowledging that the answers may be complex or still unknown.

We use “artificial intelligence” and “AI” as shorthand terms for what we know is a complex field of technologies and practices, currently involving machine learning and large language models (LLMs). Using the abbreviation “AI” is handy, but not ideal, because we recognize that AI is not really “artificial” (in that AI is created and used by humans), nor “intelligent” (at least in the way we think of human intelligence).

CC licensing and training AI on copyrighted works

Can you use CC licenses to restrict how people use copyrighted works in AI training?

This is among the most common questions that we receive. While the answer depends on the exact circumstances, we want to clear up some misconceptions about how CC licenses function and what they do and do not cover.

You can use CC licenses to grant permission for reuse in any situation that requires permission under copyright. However, the licenses do not supersede existing limitations and exceptions; in other words, as a licensor, you cannot use the licenses to prohibit a use if it is otherwise permitted by limitations and exceptions to copyright.

This is directly relevant to AI, given that the use of copyrighted works to train AI may be protected under existing exceptions and limitations to copyright. For instance, we believe there are strong arguments that, in most cases, using copyrighted works to train generative AI models would be fair use in the United States, and such training can be protected by the text and data mining exception in the EU. However, whether these limitations apply may depend on the particular use case.

It’s also useful to look at this from the perspective of the licensee — the person who wants to use a given work. If a work is CC licensed, does that person need to follow the license in order to use the work in AI training? Not necessarily — it depends on the specific use.

  • To the extent your AI training is covered by an exception or limitation to copyright, you need not rely on CC licenses for the use.
  • To the extent you are relying on CC licenses to train AI, you will need to follow the relevant requirements under the licenses.
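
The two bullets above describe a simple decision procedure, which our flowchart lays out in full. As a minimal illustrative sketch only (not legal advice, and with hypothetical function and parameter names), the core logic looks like this:

```python
def must_follow_cc_license(use_covered_by_exception: bool,
                           work_is_cc_licensed: bool) -> bool:
    """Return True if a prospective AI trainer must comply with the work's
    CC license conditions for the training use in question."""
    if use_covered_by_exception:
        # The use is permitted by a limitation or exception to copyright
        # (e.g., fair use in the US, or the EU text-and-data-mining
        # exception), so the trainer need not rely on the CC license.
        return False
    # Otherwise, any permission comes from the license itself, so its
    # conditions (attribution, share-alike, etc.) apply.
    return work_is_cc_licensed

# Training covered by an exception: no need to rely on the license.
assert must_follow_cc_license(True, True) is False
# No exception applies and the work is CC licensed: follow the license.
assert must_follow_cc_license(False, True) is True
```

Of course, whether an exception actually covers a given training use is a fact-specific legal question that no boolean flag can settle; the sketch only mirrors the structure of the analysis described above.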

Another common question we hear is “Does complying with CC license conditions mean you’re always legally permitted to train AI on that CC-licensed work?”

Not necessarily — it is important to note here that CC licenses only give permission for rights granted by copyright. They do not address where other laws may restrict training AI, such as privacy laws, which are always a consideration where material contains personal data and are not addressed by copyright licensing. (Many kinds of personal data are not covered by copyright at all, but may still be covered by privacy-related regulations.)

For more explanation, see our flowchart regarding the CC licenses in this context, and read more in our FAQ on AI and CC licenses.

A flowchart showing how CC licenses and legal tools intersect with intellectual property and artificial intelligence.

CC Licenses and outputs of generative AI

In the current context of rapidly developing AI technologies and practices, governments scrambling to regulate AI, and courts hearing cases regarding the application of existing law, our intent is to give our community the best guidance available right now. If you create works using generative AI, you can still apply CC licenses to the work you create with the use of those tools and share your work in the ways that you wish. The CC license you choose will apply to the creative work that you contribute to the final product, even if the portion produced by the generative AI system itself may be uncopyrightable. We encourage the use of CC0 for those works that do not involve a significant degree of human creativity, to clarify the intellectual property status of the work and to ensure the public domain grows and thrives.

Beyond copyright

Though using CC licenses and legal tools for training data and works produced by generative AI may address some legal uncertainty, it does not solve all the ethical concerns raised, which go far beyond copyright — involving issues of privacy, consent, bias, economic impacts, and access to and control over technology, among other things. Neither copyright nor CC licenses can or should address all of the ways that AI might impact people. There are no easy solutions, but it is clear we need to step outside of copyright to work together on governance, regulatory frameworks, societal norms, and many other mechanisms to enable us to harness AI technologies and practices for good.

We must empower and engage creators

Generative AI presents an amazing opportunity to be a transformative tool that supports creators — both individuals and organizations — provides new avenues for creation, facilitates better sharing, enables more people to become creators, and benefits the commons of knowledge, information, and creativity for all.

But there are serious concerns, such as issues around author recognition and fair compensation for creators (and the labor market for artistic work in general), the potential flood of AI-generated works on the commons making it difficult to find relevant and trustworthy information, and the disempowering effect of the privatization and enclosure of AI services and outputs, to name a few.

For many creators, these and other issues may be a reason not to share their works at all under any terms, not just via CC licensing. CC wants AI to augment and support the commons, not detract from it, and we want to see solutions to these concerns to avoid AI turning creators away from contributing to the commons altogether.

Join us

We believe that trustworthy, ethical generative AI should not be feared, but instead can be beneficial to artists, creators, publishers, and to the public more broadly. Our focus going forward will be:

  • To develop and share principles, best practices, guidance, and training for using generative AI to support the commons. We don’t have all the answers — or necessarily all the questions — and we will work collaboratively with our community to establish shared principles.
  • To continue to engage our community and broaden it to lift up diverse, global voices and find ways to support different types of sharing and creativity.
  • To engage more deeply with AI developers and services to increase their support for transparency and ethical, public-interest tools and practices. CC will be seeking to collaborate with partners who share our values and want to create solutions that support a thriving commons.

For over two decades we have stewarded the legal infrastructure that enables open sharing on the web. We now have an opportunity to reimagine sharing and creativity in this new age. It is time to build new infrastructure that supports better sharing with generative AI.

We invite you to join us in this work, as we continue to openly discuss, deliberate, and take action in this space. Follow along with our blog series on AI, subscribe to our newsletter, support our work, or join us at one of our upcoming events. We’re particularly excited to welcome our community back in-person to Mexico City in October for the CC Global Summit, where the theme is focused squarely on AI & the commons.

The post Understanding CC Licenses and Generative AI appeared first on Creative Commons.

]]>