Major Developments in Copyright
Companies Are Raising More Funds
Diverging Visions for the Future
and much more...
Copyrights Granted for AI-Generated Works
In a first (and second) this week, the US Copyright Office has granted protection to two AI-generated works: “A Single Piece of American Cheese” and “A Collection of Objects Which Do Not Exist.”
“A Single Piece of American Cheese” was prepared as a test case by the creators of a tool called Invoke AI (including CEO and artist Kent Keirsey), which uses a method called inpainting to give the artist fine-grained control over the model’s generative output. In order to support the argument that this method warranted copyright protection, Invoke prepared and submitted to the USCO a timelapse video detailing the full creation process of the work. They have since released a whitepaper entitled “How We Received The First Copyright for a Single Image Created Entirely with AI-Generated Material.” The registration protects the “selection, coordination, and arrangement” of the AI-generated elements.
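At its core, inpainting regenerates only a masked region of an image and composites the new pixels back over the original, which is what lets an artist direct the model patch by patch. A minimal sketch of that compositing step (the arrays and the `composite_inpaint` function are illustrative, not Invoke’s actual implementation):

```python
import numpy as np

def composite_inpaint(original, generated, mask):
    """Blend a generated patch into an image only where the mask is set.

    original, generated: H x W x 3 float arrays in [0, 1]
    mask: H x W float array in [0, 1]; 1 = regenerate, 0 = keep original
    """
    mask = mask[..., np.newaxis]  # broadcast the mask over the color channels
    return mask * generated + (1.0 - mask) * original

# Toy example: regenerate only the left column of a 2x2 image.
original = np.zeros((2, 2, 3))   # all-black "existing" image
generated = np.ones((2, 2, 3))   # all-white "model output"
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])
result = composite_inpaint(original, generated, mask)
```

Real inpainting pipelines also condition the model on the unmasked context during generation; this sketch shows only the final masked blend.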
“A Collection of Objects Which Do Not Exist,” on the other hand, was registered as “2-D artwork, images generated by artificial intelligence” on the basis of “collage, selection and arrangement.” The logic here is that the artist (anonymous) arranged AI-generated elements in such a manner as to warrant copyright protection.
Court Holds that Training on Copyrighted Works is Not Fair Use
This week, the U.S. District Court for the District of Delaware granted summary judgment in the closely followed case Thomson Reuters & West v. Ross Intelligence, holding that training an AI on copyrighted works is not fair use.
Quoting Judge Bibas in the opinion: "Factors one and four favor Thomson Reuters. Factors two and three favor Ross. Factor two matters less than the others, and factor four matters more. Weighing them all together, I grant summary judgment for Thomson Reuters on fair use."
This is an early win for artists and authors whose works have been used to train AI models without compensation, and is the first of many substantive decisions to come on this issue. While this does not bind the judges in the other cases being tried across the country on AI training issues, it may be a bellwether for how those other judges will interpret these fact patterns.
Meta’s Llama Model Trained on Pirated Works
According to a report from Tom’s Hardware and X user @vxunderground, unsealed court documents reveal that “Meta staff torrented nearly 82TB of pirated books for AI training” from sources including SciHub, ResearchGate, LibGen, Anna's Archive, and Z-Library. Quoting emails revealed in this batch of evidence, Ars Technica calls this the “most damning evidence” against Meta in its ongoing copyright litigation.
OpenAI Reveals Roadmap for GPT-4.5 and GPT-5
In a shocking tweet this week, Sam Altman revealed OpenAI’s product roadmap for GPT-4.5 and GPT-5, including some deviations from previously announced plans.
The highlights include a return to focusing on ease of use and simplicity. GPT-4.5 will be the company’s next release, and is the model formerly known internally as Orion.
The flagship GPT-5 will come next, and will merge the GPT-series of LLMs with the o-series of “reasoning models” capable of chain-of-thought. With that in mind, the previously revealed o3 will now be rebranded and merged into the forthcoming GPT-5.
Best of all, GPT-5 at its “standard intelligence” level will be available for free, with higher intelligence levels available to paid subscribers.
AI Action Summit in Paris
World leaders from government and industry are in Paris this week for the AI Action Summit. These include Vice President JD Vance, who remarked that he is less concerned about “AI safety” than about “AI opportunity” in the coming years. Forbes notes that some of the most prevalent themes throughout this year’s summit include public interest AI successes like AlphaFold, the future of work, and geopolitical competition.
Google Walks Back Its Weapons Pledge
In a story I should have covered last week: in 2018, Google made a pledge not to build AI for weapons or surveillance. Last week, Google removed this pledge from its website. Andrew Ng, founder of the Google Brain department, expressed that he was “very glad” to see this deletion at the Military Veteran Startup Conference in San Francisco this week. In place of the former pledge, DeepMind CEO Demis Hassabis published a blog post on “responsible AI” which noted that companies and governments should work together to build AI that “supports national security.”
Sutskever’s Safe Superintelligence Raising Funds
Back in June 2024, OpenAI co-founder Ilya Sutskever made headlines when he announced his new venture Safe Superintelligence. A few months later in September, the startup raised $1B in cash at a $5B valuation. Reports this week surfaced that SSI is now in talks to fundraise at $20B valuation.
Harvey Raises $300M Series D at $3B valuation
This week, legal AI startup Harvey announced its Series D raise. The round was led by Sequoia and included Coatue, Kleiner Perkins, OpenAI Startup Fund, GV, Conviction, Elad Gil, and REV, “the venture capital arm of RELX Group which owns LexisNexis Legal & Professional.” With it, Harvey has raised an additional $300M at a $3B valuation. The company’s press release cites “scaling agentic workflows” and “building out integrated enterprise use cases” as two goals for this investment.
OpenAI Working on Custom Chip
Reports from Reuters this week claim that OpenAI is “finalizing the design for its first in-house chip in the next few months and plans to send it for fabrication at Taiwan Semiconductor Manufacturing Co.” If OpenAI can successfully design its own chips with TSMC, this would reduce its reliance on Nvidia and allow it to train new models more quickly.
OpenAI Super Bowl Ad
It’s hard to believe it’s been almost a week since the Eagles dominated the Chiefs in Super Bowl 59. Go Birds, by the way. One ad that may have gone unnoticed since it lacked the seemingly-requisite celebrity cameos was a minute-long spot from OpenAI, ushering in their concept of the Intelligence Age.
Elon Musk Makes Acquisition Offer
According to the Wall Street Journal, Elon Musk has made a $97.4B offer to acquire OpenAI. In a statement provided by his attorney, Musk said: “It’s time for OpenAI to return to the open-source, safety-focused force for good it once was. We will make sure that happens.”
Altman responded to Musk on X, writing, “no thank you but we will buy twitter for $9.74 billion if you want.” Musk called Altman a “swindler.”
Anthropic Unveils Anthropic Economic Index
Anthropic, the makers of Claude, revealed their “Anthropic Economic Index” this week.
From Anthropic:
The Index’s initial report provides first-of-its-kind data and analysis based on millions of anonymized conversations on Claude.ai, revealing the clearest picture yet of how AI is being incorporated into real-world tasks across the modern economy.
We're also open sourcing the dataset used for this analysis, so researchers can build on and extend our findings. Developing policy responses to address the coming transformation in the labor market and its effects on employment and productivity will take a range of perspectives. To that end, we are also inviting economists, policy experts, and other researchers to provide input on the Index.
The main findings from the Economic Index’s first paper are:
• Today, usage is concentrated in software development and technical writing tasks. Over one-third of occupations (roughly 36%) see AI use in at least a quarter of their associated tasks, while approximately 4% of occupations use it across three-quarters of their associated tasks.
• AI use leans more toward augmentation (57%), where AI collaborates with and enhances human capabilities, compared to automation (43%), where AI directly performs tasks.
• AI use is more prevalent for tasks associated with mid-to-high wage occupations like computer programmers and data scientists, but is lower for both the lowest- and highest-paid roles. This likely reflects both the limits of current AI capabilities, as well as practical barriers to using the technology.
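As a rough illustration of the kind of aggregation behind the first finding, here is a sketch that groups task-level AI-usage flags by occupation and counts the occupations whose share of AI-touched tasks crosses a threshold. The records and occupation names are invented for illustration; Anthropic’s actual dataset and methodology differ.

```python
from collections import defaultdict

def occupations_over_threshold(task_records, threshold=0.25):
    """Count occupations where at least `threshold` of tasks show AI use.

    task_records: iterable of (occupation, task_uses_ai) pairs,
    where task_uses_ai is a bool.
    """
    totals = defaultdict(int)   # tasks observed per occupation
    with_ai = defaultdict(int)  # tasks with AI use per occupation
    for occupation, uses_ai in task_records:
        totals[occupation] += 1
        if uses_ai:
            with_ai[occupation] += 1
    return sum(
        1 for occ in totals
        if with_ai[occ] / totals[occ] >= threshold
    )

# Hypothetical records: (occupation, whether AI use appeared for that task)
records = [
    ("software developer", True), ("software developer", True),
    ("software developer", False),
    ("technical writer", True), ("technical writer", False),
    ("chef", False), ("chef", False), ("chef", False),
]
count = occupations_over_threshold(records, threshold=0.25)
```

With this toy data, software developers (2/3) and technical writers (1/2) clear the one-quarter threshold while chefs (0/3) do not, so the count is 2 of 3 occupations.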
That's it for this week's update. See you next week!