
In February 2025, executives from OpenAI visited Los Angeles, hoping to strike deals with major Hollywood studios. They left empty-handed. The studios declined partnerships to use Sora, the company’s AI-powered video generation tool, Bloomberg reported, citing concerns over how OpenAI would use their data and potential backlash from unions worried about job losses following the 2023 Hollywood strikes.
OpenAI’s failure to win over Hollywood exposes a deeper issue for the company: It seems unwilling to prove it can work within the contracts, licensing agreements, and labor protections that have governed the entertainment business for more than a century. OpenAI isn’t just alienating the $100 billion-plus entertainment business—it is missing an opportunity to show other industries that it’s capable of building viable, long-term partnerships.
OpenAI’s Hollywood misadventure is reminiscent of an earlier dispute between the media industry and Silicon Valley. In the early 2000s, Napster seemed poised to upend the music industry by offering users an unprecedented digital catalog of songs. The service became a cultural phenomenon, but it was predicated on distributing unlicensed (stolen) music, and the company’s refusal to engage with copyright law proved costly in the long run. Major labels sued, and by the time Napster realized it was better off negotiating licensing deals, it was too late. The music industry had moved on, opting for sustainable agreements with services such as Rhapsody, iTunes, and eventually Spotify. OpenAI’s technology is far more transformative than Napster’s, but its story could look the same.
Like Napster, OpenAI fails to see that working with established industries is the better path to long-term growth. Instead of engaging with creators, OpenAI has scraped news articles, ingested entire libraries of books without securing rights, and launched a voice assistant that sounded a lot like Scarlett Johansson (who had previously denied permission). Amid criticism around the Johansson case, OpenAI claimed the voice belonged to a different unnamed actress and carried on. That strategy—move fast, ask for permission later—may have worked so far to make OpenAI the dominant player in generative AI, but it’s not sustainable.
Studio-friendly AI
As Napster did, OpenAI is opening the door for the film industry to partner with AI companies that respect its intellectual property. Lionsgate announced a partnership with Runway to build a proprietary AI model trained exclusively on the studio’s catalog. The resulting model will be fully transparent and controlled—Lionsgate knows exactly what IP is being used and can distribute revenue internally or reinvest it. Similarly, James Cameron has teamed with StabilityAI to bring AI to special effects, and veteran film executive Peter Chernin joined with Andreessen Horowitz to launch an AI-native studio, Promise.
These ventures differ from OpenAI in that they are either training models exclusively on licensed data, using AI in narrow, artist-controlled pipelines, or building new studios with Hollywood’s direct involvement. By insisting on control and not acknowledging filmmakers’ concerns, OpenAI may eventually find itself on the outside looking in.
OpenAI should learn from other tech companies that once saw regulation as an obstacle but later realized the benefits of cooperation—often after bruising fights with regulators. For example, Uber touted itself as a “disruptor,” yet it eventually worked with city governments, including London and Washington, D.C., to secure municipal contracts and bolster market trust.
OpenAI still has time to convince legacy industries that it can respect intellectual property rights, data privacy, and labor rules.
As a first step, OpenAI should offer more transparency around its AI training data, helping studios and unions understand what copyrighted material is being used. A content provenance system that traces AI-generated outputs like scripts or performances would not require the model to be fully disclosed. OpenAI, studios, and creators could rely on third-party audits to certify that the models were developed with agreed-upon data restrictions and standards. This can be done while still protecting proprietary information.
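One way to picture such a provenance system: each generated asset carries a small, signed metadata record naming the model version and the pre-approved datasets behind it, which an auditor can verify without seeing the model itself. The sketch below assumes entirely hypothetical identifiers (output, model, and dataset names, and a shared auditor key); it is an illustration of the idea, not any real OpenAI or studio system.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Metadata attached to an AI-generated asset—not the model weights."""
    output_id: str        # identifier for the generated script or clip (hypothetical)
    model_version: str    # which model produced it (hypothetical)
    dataset_ids: list     # pre-approved, licensed dataset identifiers (hypothetical)
    created_at: str       # ISO timestamp of generation

def sign_record(record: ProvenanceRecord, secret: str) -> str:
    """Produce a tamper-evident digest a third-party auditor can recompute."""
    payload = json.dumps(asdict(record), sort_keys=True)
    return hashlib.sha256((payload + secret).encode()).hexdigest()

record = ProvenanceRecord(
    output_id="clip-0001",
    model_version="studio-model-v1",
    dataset_ids=["licensed-catalog-2024"],
    created_at=datetime.now(timezone.utc).isoformat(),
)
digest = sign_record(record, "auditor-shared-key")
```

Because the digest covers only the output's metadata, studios and unions can confirm what data was used for a given asset while the model's internals stay proprietary.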
OpenAI should also agree to share revenue with rights holders in some way. A Spotify-style royalty system may not be replicable for film, but the core idea of a creative fund is still viable—especially in controlled cases like Runway’s deal with Lionsgate. The idea isn’t to pay per use, but to license pre-approved datasets and share revenue tied to the videos derived from that content.
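The creative-fund idea can be sketched as a simple pro-rata split: a fixed pool is divided among rights holders in proportion to how often each licensed dataset contributed to generated videos. The fund size, dataset names, and usage weights below are invented for illustration; any real deal would negotiate these terms.

```python
def distribute_fund(fund_total: float, usage_by_dataset: dict) -> dict:
    """Split a creative fund pro rata by each licensed dataset's
    share of generation activity (weights are hypothetical)."""
    total = sum(usage_by_dataset.values())
    if total == 0:
        return {k: 0.0 for k in usage_by_dataset}
    return {k: round(fund_total * v / total, 2)
            for k, v in usage_by_dataset.items()}

payouts = distribute_fund(
    1_000_000,
    {"studio-a-catalog": 3, "studio-b-catalog": 1},
)
# payouts: {"studio-a-catalog": 750000.0, "studio-b-catalog": 250000.0}
```

The point of the design is that payment is tied to the pre-approved dataset license, not metered per individual use—closer to a fund distribution than a Spotify-style per-stream royalty.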
OpenAI in Hollywood
There’s a real opportunity in bundling data: OpenAI could adopt a similar model using licensed bundles of copyrighted content, developed in partnership with unions and individual studios. Even a limited system would demonstrate a willingness to collaborate. The greatest risk for OpenAI isn’t in getting the details wrong—it’s in doing nothing while competitors move ahead.
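Even a limited version of such bundling reduces to a gatekeeping check: training material is admitted only if it belongs to a bundle the unions and studios have approved. The bundle IDs and record schema below are hypothetical, meant only to show how narrow the enforcement logic could be.

```python
def filter_training_items(items: list, licensed_bundles: list) -> list:
    """Keep only training items whose bundle ID appears among the
    union- and studio-approved licensed bundles (hypothetical schema)."""
    approved = {b["bundle_id"] for b in licensed_bundles if b.get("approved")}
    return [item for item in items if item["bundle_id"] in approved]

bundles = [
    {"bundle_id": "guild-scripts-2024", "approved": True},
    {"bundle_id": "scraped-web-text", "approved": False},
]
items = [
    {"asset": "script-17", "bundle_id": "guild-scripts-2024"},
    {"asset": "article-9", "bundle_id": "scraped-web-text"},
]
kept = filter_training_items(items, bundles)
# kept retains only the asset from the approved bundle
```

A check like this is easy to audit: the approved-bundle list is small, negotiated up front, and verifiable without exposing the rest of the training pipeline.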
In this vein, it’s in OpenAI’s best interest to engage more broadly with Hollywood—not just studios, but labor and creators. The 2023 strikes showed that unions shape public narratives and policy, and any long-term strategy must reflect that. Funneling a portion of AI-driven revenue to industry professionals would signal an intent to work with, not around, human talent. This kind of initiative could reframe AI as a creative partner, not a threat, and help OpenAI stand apart from opaque general-purpose models.
Last month, more than 400 Hollywood creatives sent a letter to the White House, arguing that AI companies should follow copyright law like any other industry: “There is no reason to weaken or eliminate the copyright protections that have helped America flourish,” the letter said. The longer OpenAI waits to act, the more it opens the door for others to do so first.
Matt Steinberg is a Tech and Public Policy Scholar at Georgetown University. He was previously a film and television development executive.