Federico Ramallo

Jun 23, 2024

Should AI Companies Compensate Creators for Training Data?


Raising questions about the lack of a social contract for generative AI training.

Artificial intelligence (AI) has garnered significant attention recently, in particular large language models (LLMs) like ChatGPT and image generators like Midjourney. These tools identify statistical patterns in their input data and use them to generate new outputs. It is worth noting that these are not examples of artificial general intelligence (AGI), which remains a distant goal for tech companies.
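For intuition about what "identifying patterns in input data" means, here is a toy sketch in Python. It is a drastic simplification, not how production LLMs work: a bigram model that counts which word follows which in a hypothetical corpus, then samples those counts to produce new text.

```python
import random
from collections import defaultdict

# Hypothetical corpus standing in for scraped training text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which word follows which: the "patterns" in the input data.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    """Generate new text by sampling the learned word-to-word patterns."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat"
```

Modern LLMs replace these simple counts with neural networks trained over billions of tokens, but the core principle of learning statistical patterns from human-created data is the same.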

AI, contrary to popular belief, is not a new concept; it has been used across various industries for decades. Early examples include machine learning algorithms deployed in space telescopes and long-range radars, as well as commercial AI products like Salesforce's Einstein, launched in 2016. As far back as the 1960s, programs like ELIZA demonstrated early conversational AI by mimicking human dialogue patterns.

Throughout history, people have often anthropomorphized AI, attributing human-like qualities to machines. The tendency continues today with chatbots like "Eugene Goostman," which was presented as a 13-year-old boy, a persona that made its conversational slips more forgivable and its Turing test performances more convincing. These tests, based on a concept proposed by Alan Turing, assess whether a machine can exhibit behavior indistinguishable from a human's.

The AI systems of today, such as LLMs, operate on different foundations, primarily using neural networks loosely inspired by the human brain. The mathematical groundwork was laid as early as 1943, when Warren McCulloch and Walter Pitts proposed a simple model of an artificial neuron. Despite these advancements, current AI tools still rely heavily on human-created data for training.
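That 1943 groundwork, the McCulloch-Pitts neuron, fires when the weighted sum of its binary inputs reaches a threshold. A minimal sketch:

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Fire (output 1) when the weighted input sum meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights of 1 and a threshold of 2, the neuron computes logical AND.
assert mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2) == 1
assert mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2) == 0
```

Stacking many such units, with learned rather than hand-set weights, is the basic recipe behind today's neural networks.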

One major concern is the ethical implications of AI training. Companies like OpenAI have scraped vast amounts of online content, including text, videos, podcasts, and audiobooks, without compensating the creators. This practice raises questions about the lack of a social contract for generative AI training, as highlighted by musician Ed Newton-Rex.

Fairly Trained, a non-profit organization founded by Newton-Rex, has launched a certification program for generative AI companies that respect creators' rights by obtaining consent for the data they train on. A divide is growing between AI companies that obtain such consent and those that do not, and Fairly Trained aims to help consumers and companies identify the ethical providers through its certifications.

The first certification, the Licensed Model certification, is awarded to AI models that do not use copyrighted work without a license. Models that rely on "fair use" exceptions are excluded, since reliance on fair use signals that rights-holders' consent was not obtained. Nine companies have already received the certification: Beatoven.AI, Boomy, BRIA AI, Endel, LifeScore, Rightsify, Somms.ai, Soundful, and Tuney.
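Fairly Trained publishes its own certification criteria; purely to illustrate the underlying idea of a consent-gated training set, a pipeline might filter a catalogue down to works whose rights-holders have explicitly licensed training use. All names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Work:
    title: str
    rights_holder: str
    licensed_for_training: bool  # explicit consent on record

# Hypothetical catalogue; none of these entries are real.
catalogue = [
    Work("Song A", "Label X", licensed_for_training=True),
    Work("Essay B", "Author Y", licensed_for_training=False),
    Work("Track C", "Artist Z", licensed_for_training=True),
]

# Keep only works with rights-holder consent: the core idea that the
# Licensed Model certification rewards.
training_set = [w for w in catalogue if w.licensed_for_training]
print([w.title for w in training_set])  # ['Song A', 'Track C']
```

In practice, provenance and licensing metadata are far messier than a single boolean, which is part of why third-party certification is useful.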

The initiative is supported by an advisory committee of experts from various fields, including Tom Gruber, Elizabeth Moody, Maria Pallante, and Max Richter. While the certification is not a complete solution to all issues related to AI training, it highlights the difference between companies that license data and those that use data without consent. Fairly Trained plans to address more complex issues, such as opt-in vs. opt-out data acquisition, with future certifications.

The certification has garnered support from various organizations, including the Association of American Publishers, the Association of Independent Music Publishers, Concord, Pro Sound Effects, and Universal Music Group. Christopher Horton from Universal Music Group emphasized the importance of ethical AI practices that support creativity and respect copyright.

Fairly Trained invites generative AI companies that do not rely on scraping and fair use for their training data to apply for certification, which can cover either the entire organization or specific models.

Another issue is the sudden accessibility of AI to the general public, which has raised concerns about transparency and ethical use. Systems like Amazon's "Just Walk Out" shopping technology have relied on human labor for accuracy, a dependence that tech giants often obscure. The exploitation of workers in developing countries for tasks like reviewing graphic content further underscores the ethical dilemmas.

Neural networks are often described as a "black box," and their complexity poses significant challenges: understanding how these systems make decisions is crucial for preventing bias and ensuring they function properly. Despite these issues, AI has many positive applications, such as dubbing videos into other languages, automated content summarization, and enhanced photo editing.

AI is not a panacea and still requires significant human intervention. The resources needed to train and operate AI models, including electricity, water, and data center space, are substantial. Companies must be held accountable for how they handle training data, and the public must advocate for responsible AI use and for support of human creators.


