Rethinking music fund operations with agentic AI
06 Mar, 2026, by Sam Morey
Introduction
The music rights market is entering a more competitive and institutional phase. Operational maturity is no longer just a back-office function. It is becoming a primary driver of asset value.
The Catalog Maturity Curve introduced a way for funds to benchmark what good looks like operationally. The characteristics at the top of that curve are consistent: the funds that move fastest in competitive processes, report with the highest confidence, and scale efficiently are those where data capability sits at the core of the operating model. Owning and evolving that capability is no longer the preserve of a small number of heavily capitalized platforms. It is becoming a market requirement for all.
Simultaneously, a new generation of agentic development tools - such as Claude Code, Gemini CLI, and Amazon Q - is lowering the traditional cold-start barrier to building these systems. Work that previously required large, highly specialized engineering teams can now be accelerated, allowing subject-matter experts to shape and extend applications directly. This dramatically expands what funds can realistically deliver.
However, it also changes the risk profile. These tools are extremely powerful and highly convincing. But without the right architectural guardrails, governance, and planning, they can lead to businesses quickly accumulating poorly understood technical debt.
This is happening as funds already face complex operational pressures. Addressing these challenges is not just a technology project; it is central to how funds generate long-term value. The model for building internal capabilities is moving toward higher-leverage teams, faster proof of value, and continuous platform evolution.
This article sets out our recommendations and other points for consideration when adopting these new tools. We share our approach that accelerates time-to-value, avoids hidden technical debt, and builds institutional-grade solutions that funds can rely on to not only run, but to outperform in the asset class.
Operational maturity is now the differentiator
The macroeconomic reality for music rights investors has fundamentally changed. Following a period of aggressive acquisitions, catalog entry multiples have now stabilized. The asset class is transitioning from a high-growth phase to a mature, optimization-focused market.
Competition for premium assets remains intense, leading to shrinking margins for error. Furthermore, limited partners (LPs) are increasing their scrutiny, and investors demand greater visibility into both performance and cash flow volatility. In a market where many funds hold similar portfolios of rights, sustainable differentiation will no longer be achieved purely through capital deployment.
As the asset class continues to mature, the cost of capital will become more directly tied to confidence in cash flow. Consequently, data capability can no longer be viewed as an internal efficiency project. It is a critical value creation lever.
To support this trend and address the lack of a universal benchmarking tool for operations and governance, we introduced the Catalog Maturity Curve to allow funds to assess their capabilities relative to peers and identify priorities for further development.
We group the drivers of catalog maturity into four interconnected functions:
- Data & Infrastructure – data architecture, platforms, and team capability
- Legal – rights governance and contractual integrity
- Financial – valuation methodology and performance reporting
- Operational – asset administration and revenue execution
As funds progress along the curve, operations typically evolve from fragmented, manual processes to a scalable, data-driven platform. Reporting becomes clearer, underwriting more defensible, and diligence faster.
The objective is not to reach a fixed “top” of the curve, but to understand your current position and improve in line with your strategy and scale.
The operational gap inside most funds
Despite the billions of dollars flowing into the asset class and significant progress being made, the operational infrastructure within many funds remains strained. Today, funds face a number of consistent challenges including:
- Losing key deals: Valuation tolerances are often too wide, limiting the precision funds need to push bids confidently in competitive processes.
- Point-in-time valuations: Underwriting models and processes are frequently isolated from live operations, making ongoing performance reporting to investors challenging and disjointed.
- Growing pains: As AUM scales, operational complexity increases, turning data ingestion and normalization into administrative bottlenecks.
- Limited market data: There are limited and often imperfect datasets to triangulate royalties earned with consumption or market growth.
- Reporting takes too long: As LPs ask increasingly granular questions, it can take an unsustainable amount of time to assemble and validate the necessary answers.
- SaaS limitations: Off-the-shelf SaaS solutions help standardize data but rarely deliver a complete operating model tailored to a specific fund's investment thesis, and can introduce data sovereignty concerns.
Crucially, these challenges are not fundamentally about software tooling. They are about speed of conviction: building platforms that make you more competitive through deals, scaling AUM without adding unsustainable headcount, and building investor confidence so the next funding round closes faster at a lower cost of capital.
What AI actually changes
The emergence of agentic AI tools represents a structural inflection point not only in software engineering, but also day-to-day operations. This is a change in enablement, not a hype cycle.
Command-line tools and agentic development platforms - such as Claude Code, Gemini CLI, and Amazon Q - both erode the traditional cold-start barrier and significantly speed up development cycles. They can handle traditionally labour-intensive tasks in a fraction of the time and allow engineers to reach higher-value, more complex tasks significantly faster.
For a music fund, the primary impact is time to value. The goal is to reach more advanced functionality more quickly for comparable cost. From our experience, Jevons paradox is playing out: our teams aren't doing less, they're doing more. They're getting there faster and pushing the boundaries further.
However, AI is not a silver bullet. Real operational transformation still requires strong architectural discipline: robust data models, clear governance, and organizational alignment.
Owning a customized, high-performance tech-enabled operating platform is no longer a luxury reserved for mega-funds. It is becoming an accessible requirement across the asset class. AI can help close the capability gap, but only when applied with the right rigour and a healthy scepticism toward vendor hype.
The new delivery risk
Although the barriers to entry are falling, the risk profile is changing. These agentic tools are extremely powerful and extremely convincing. They allow smaller teams to move faster and generate significantly more code.
But if misapplied, they can cause businesses to accumulate poorly understood technical debt at speed. Without proper oversight, a fund can easily build a platform that appears functional on the surface but cannot be relied upon to run a portfolio.
Early evidence from engineering teams adopting AI-assisted coding supports this. Research from CodeScene found that without an active focus on code health, AI adoption can lead to 41% more defects, with any early productivity gains disappearing within around eight weeks.
For funds, this can manifest in:
- Numbers that are difficult to explain in board meetings
- Non-repeatable data outputs or processes
- Inconsistent definitions of key numbers with no audit trail
- Fragile ingestion pipelines that break when upstream data changes
- Overly complex infrastructure that becomes difficult to maintain, extend or explain
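One inexpensive guardrail against the fragile-pipeline failure mode is to validate the shape of incoming data before it enters the platform, so an upstream change fails loudly rather than silently corrupting downstream numbers. A minimal sketch in Python (the column names and `validate_statement` helper are illustrative, not a reference to any specific vendor's schema):

```python
# Sketch: fail fast when an upstream royalty statement changes shape,
# instead of silently ingesting bad data. Column names are illustrative.
EXPECTED_COLUMNS = {"isrc", "territory", "units", "net_receipts"}

def validate_statement(rows: list[dict]) -> list[dict]:
    """Reject a statement whose columns drift from the agreed schema."""
    if not rows:
        raise ValueError("Statement is empty")
    missing = EXPECTED_COLUMNS - rows[0].keys()
    if missing:
        raise ValueError(
            f"Upstream schema changed; missing columns: {sorted(missing)}"
        )
    return rows

# Usage: a conforming statement passes through unchanged;
# a statement missing columns raises before ingestion begins.
good = [{"isrc": "GBAYE0000001", "territory": "GB",
         "units": 120, "net_receipts": 1.43}]
assert validate_statement(good) == good
```

Checks like this are exactly the kind of boilerplate an agent can generate quickly, but the decision about what the contract *is* belongs to the team.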
These problems arise when teams mistake the ability to generate code quickly for the ability to build solutions to business problems. To be effective, agents need to be deployed with a clear understanding of their limitations, with a proper plan in the context of your business, and with clearly defined and understood audit controls.
Choosing wisely
Agentic AI is extremely powerful and improving rapidly. However, it is not a complete solution and it’s definitely not the best solution for all tasks.
The following table gives some examples of where agentic AI can add the most value and where more traditional approaches remain preferable.
In all cases, AI agents need human oversight. They cannot reason about or understand the physical world; they are simply very good at probabilistic prediction in language. What sounds lucid and completely plausible may be absolutely incorrect.
Where agentic AI adds value
| Phase | Activity | Suitable for agentic AI | Considerations & limitations |
|---|---|---|---|
| Planning | Requirement gathering | High | Agentic tools are effective for exploring requirements and generating documentation, significantly reducing the preparation burden for internal review and discussion. |
| Planning | Architectural design | High | LLMs can rapidly explore architectural options and data models, helping teams evaluate multiple approaches. |
| Planning | Project planning | High | Agents are effective for scaffolding project plans and drafting documentation. Outputs should always be reviewed, as LLMs can fabricate details or unrealistic timelines. |
| Engineering | Writing code | High | Agents can efficiently generate boilerplate SQL, Python, ETL scripts and other small applications. Skilled engineering oversight is required to maintain a coherent codebase. |
| Engineering | Processing data | Low | LLMs should not perform mathematical calculations directly. Instead, use agents to generate deterministic code (e.g. Python or SQL) that executes calculations repeatedly, reliably and at scale (without eating all your tokens). |
| Engineering | Normalizing unstructured data | Medium | Agents can use contextual reasoning and fuzzy matching to convert unstructured data into structured formats. Reliability can vary, so outputs should be validated within defined workflows. |
| Reporting | Visualizing outcomes | Low | Consistent reporting is best handled by traditional BI platforms such as Power BI, Looker, or Tableau. LLMs can assist with query generation but should not run reporting logic. |
| Reporting | Financial modeling | Low | Financial models require strict adherence to deterministic formulas. Agents should summarize outputs, not perform valuation calculations. |
| Strategy | Project review and roadmap planning | High | LLMs are highly effective for summarizing project progress, documenting outcomes, and helping teams prioritize future initiatives. |
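To make the "Processing data" distinction concrete: rather than asking an LLM to add up the numbers in a statement, have the agent generate deterministic code and run that instead. A minimal sketch of the kind of code an agent should emit (the field names, `royalty_due` helper, and the 75% share are illustrative assumptions, not a real fund's terms):

```python
# Sketch: the kind of deterministic calculation an agent should GENERATE,
# not perform itself in its context window. Names and rates are illustrative.
from decimal import Decimal

def royalty_due(lines: list[dict], share: Decimal) -> Decimal:
    """Sum net receipts across statement lines and apply the fund's share.

    Runs repeatably, at any scale, with an auditable formula -- the
    properties an LLM doing mental arithmetic cannot guarantee.
    """
    total = sum(
        (Decimal(str(line["net_receipts"])) for line in lines),
        Decimal("0"),
    )
    return (total * share).quantize(Decimal("0.01"))

# Usage: the same inputs always produce the same, explainable output.
lines = [{"net_receipts": 120.55}, {"net_receipts": 79.45}]
assert royalty_due(lines, Decimal("0.75")) == Decimal("150.00")
```

Note the use of `Decimal` rather than floats: for money, exact decimal arithmetic is part of making the output defensible in a board meeting.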
Our practical recommendations
The following are our recommendations for things to consider when integrating AI tools into your development workflow:
- Don’t skip the design phase: building point solutions can be fast and seem trivial, but if you don’t spend the time to understand the real business pains first, you will end up with a lot of random applications that are rarely used and need lots of maintenance. Software has a half-life; you need to maintain it.
- Use sub-agents: write out the personas for your sub-agents and have them perform specific tasks, ideally in a methodical way so each thing you build goes through the same process (see our example definitions for sub-agents below).
- Check their work: LLMs are extremely convincing, but they do not understand what is logically correct or not. If you don’t spend the time reviewing their work like you would that of a colleague, unexpected things will start to happen and you will lose context of the overall application and architecture - there be dragons!
- Don’t run too many agents in parallel: there is a temptation and a lot of hype around people spinning up armies of agents to write code. Remember, it all needs to be checked by someone who actually knows what they’re doing (for now, a human). You have to be able to keep context and focus where required. We have found running 2 agents on different tasks is the sweet spot for speed and completeness.
- Iterate: building products, teams and processes is an ever-evolving process. Make sure you’re continually challenging how you do things.
If you’re new to sub-agents, we’d recommend watching this video (there are many others out there) to understand the principle and the flow.
The sub-agents we use most often:
- Product manager: write the Business and Product Requirement Documents for you to edit, discuss, and return (integrate your agent into Google Docs/Word for speed)
- Lead architect: build out an architectural plan for the solution you’re about to build
- Test engineer: write the tests you need
- Lead engineer: do the coding
- Reviewer: review the code for improvements and conformance to the PRD
- Security engineer: review the code for security issues/exposure and compliance to any standards such as ISO27001 or PCI
- Documentation writer: write up documentation for APIs and functionality so you know what has been built and can refer to it in future, including any notes for compliance
- Git expert: chunk up code and write atomic commits, merge branches and push to main
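For illustration, a persona of this kind can be captured in a short definition file. The sketch below assumes Claude Code's convention of markdown files with YAML frontmatter under `.claude/agents/`; the field values and instructions are our own illustrative example, not the definitions we use in production:

```markdown
---
name: reviewer
description: Reviews generated code against the PRD before merge
---

You are a senior code reviewer. For every change you are given:
1. Check the diff against the PRD and flag any requirement it does not meet.
2. Flag any non-deterministic logic in financial calculations.
3. Suggest tests for any new data transformation.
Never modify files yourself; report findings only.
```

Keeping these definitions in version control means every build goes through the same reviewed process, which is the point of the methodical workflow above.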
If you’d like to see the specific sub-agent definitions we use with Claude, drop us a note and we’ll send them over.
The path to successful adoption
Adapting to transformative technology is challenging. First you have to decipher what’s hype and what’s not; then you have to move past the experimental phase and find an approach that not only creates significant value (which is hard) but is also repeatable (even harder).
Agentic tools are amazing, but they remain just that: tools, not a complete solution. Until AI can model the physical world, significant limitations will remain. To make progress, you need to consistently revise what you’re doing and where value can be created in the context of your business.
We have a four-step approach that we follow when working on projects, whether for our clients or for ourselves:
- Plan: understand where you are today. What are your current capabilities, and what skills exist within your team? Be clear on your strengths and weaknesses, and make sure you understand where you are trying to get to.
- Prove: identify a single, highly constrained proof point—such as automating the normalization of a particularly messy international royalty statement. Define a tightly scoped proof of concept to get there quickly and build confidence internally and with your LPs.
- Build: scale up the proof of concept by increasing either volume or sophistication, making sure the value created remains aligned with the effort required.
- Evolve: revisit your original plan and decide what’s next - either incremental improvements to your existing solution or a new feature addressing a different problem. Continue to challenge and refine your approach.
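A "Prove" step of the kind described above can be genuinely small. As one hedged sketch: normalizing just the territory field of a messy international statement, with unknown values routed to human review rather than guessed. The mapping table and `normalize_territory` helper are illustrative; a real version would be built from the source's observed values and reviewed by the team:

```python
# Sketch of a tightly scoped proof of concept: normalize free-text
# territory labels to ISO 3166-1 alpha-2 codes. The mapping is
# illustrative and would be extended from real statement data.
TERRITORY_MAP = {
    "uk": "GB", "united kingdom": "GB", "great britain": "GB",
    "usa": "US", "u.s.a.": "US", "united states": "US",
    "deutschland": "DE", "germany": "DE",
}

def normalize_territory(raw: str) -> str:
    """Map a messy territory label to a standard code.

    Unknown labels are tagged for human review instead of guessed,
    keeping the agent-built pipeline auditable.
    """
    key = raw.strip().lower()
    return TERRITORY_MAP.get(key, f"UNMAPPED:{raw.strip()}")

# Usage: known labels normalize; unknown ones surface for review.
assert normalize_territory("  United Kingdom ") == "GB"
assert normalize_territory("Royaume-Uni") == "UNMAPPED:Royaume-Uni"
```

Proving value on one narrow, measurable transformation like this builds the internal confidence needed before the "Build" step scales it up.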
The technical frontier is becoming accessible to far more funds, and this process is accelerating. The outcome, however, will not be evenly distributed. The funds that successfully pair rigorous discipline with agentic speed will transform their operations into a decisive competitive advantage in the asset class.
Related
Introducing the Catalog Maturity Curve: A new benchmark for music investment funds
21 Jan, 2026, by Tom Mullen
How the smartest buyers are valuing music catalogs in 2025
29 Oct, 2025, by Emma Griffiths