A Guide to IP and Asset Strategy in the Age of AI

1. Overview

The healthcare and biotech industry is on the brink of a data-driven revolution, with artificial intelligence (AI) poised to transform drug discovery, personalised medicine, and healthcare delivery.  This revolution, however, brings a new array of data ownership, privacy, and security challenges.  The following addresses these challenges, exploring how biotech companies can navigate this complex landscape while safeguarding their intellectual property, driving innovation, and fostering trust.  This discussion will examine strategies for balancing open collaboration with strong IP protection, ensuring data privacy and security, developing explainable and reliable AI, navigating regulatory uncertainties, and managing the risks tied to AI-driven healthcare solutions.  Ultimately, it will underscore the essential role of leadership in cultivating a data-driven culture that views data as a strategic asset and encourages employees to use it responsibly to advance the healthcare and biotech industry.

2. Data Ownership, Access, and Corresponding IP Strategy Considerations

Building a successful AI program in the healthcare sector depends on collaboration, as merging scientific, operational, and business expertise is crucial for achieving both innovation and capital efficiency.  Each area of expertise plays a vital role, and their integration often forms the foundation for long-term success.

In the world of AI, it is no longer enough to have a lot of data.  The real game-changer lies in having the right data – high-quality, relevant to the specific problem, and easily usable.  Think of it like this: a massive warehouse filled with random objects is not nearly as valuable as a smaller, well-organised workshop stocked with the precise tools and materials needed for a specific project.  Similarly, a massive dataset of generic information is far less valuable than a carefully curated dataset containing the accurate information required to train an AI model for a specific task.

This shift in focus from quantity to quality has significant implications for businesses.  Algorithms are rapidly becoming commodities, with open-source models and readily available tools levelling the playing field.  Competitive advantages now hinge on exclusive access to high-quality, relevant data and the legal right to use it.  Protecting this valuable data, often as trade secrets, is becoming more crucial than relying on patents to safeguard algorithms.  Data-related agreements are the bedrock of any AI-driven company, and any weakness in these agreements or overall data governance can have devastating consequences.  For instance, AI-based companies and programs must secure comprehensive data usage rights, covering both research and commercial applications, before even beginning to train their AI models.  Failure to do so could leave them vulnerable to losing control over valuable insights they generate, handing leverage to data rights holders.  Industry stakeholders recognise this crucial dynamic and expect developers to proactively address these challenges to inspire confidence in the long-term viability of an AI program.

A weak or unclear data strategy poses a significant red flag for stakeholders.  AI-based companies and programs with a well-defined plan and robust data hygiene – encompassing data acquisition, storage, management, and usage – can outshine their competitors.  This discipline not only minimises risks but also shows the operational maturity that stakeholders look for in a high-potential product opportunity.

Moreover, reconciling a data strategy with a sound IP strategy can become difficult for companies.  As healthcare and biotech converge with AI, creators of data-driven business models often struggle to balance IP protection and data protection.  However, these challenges can be alleviated through prioritisation and planning.

Patents do not safeguard the data underlying inventions; instead, they protect the insights derived from analysing that data.  For instance, patents and trade secrets can be combined to cover the methodologies used to generate clinical insights, such as processes for identifying new drug targets or biomarkers.  Data analysis and insight generation are therefore essential prerequisites for obtaining patent protection.

The healthcare and biotech industries must adopt a mindset of “asking for permission, not forgiveness” regarding data ownership and access.  Attempting to rectify mistakes after they occur is costly and often impractical.  The top priority for a data-driven business model is therefore ensuring ownership or control of data assets to a degree that aligns with both the company’s planned and actual uses.  This should be established immediately through appropriate data use or collaboration agreements.  Once data has been analysed and downstream insights generated (e.g., innovations in diagnostics), patents and trade secrets can be pursued to protect those insights.

A first priority under this mindset is defining the specific purposes for which the company intends to utilise its data.  This involves articulating the scope of desired data rights before entering negotiations for data-sharing, collaboration, or co-development agreements.  Failing to define these rights upfront can lead to complications later.  If broader rights become necessary after agreements are in place, renegotiating with the data holder may be unpleasant or even impossible, as the data holder holds all the leverage.  Although planning requires time and financial investment, it simplifies execution and reduces expenses in the long run.

The consequences of seeking forgiveness can be devastating.  For example, a company that lacks proper rights to just 2% of its training data after launching an AI-driven product might encounter significant setbacks.  It could be compelled to remove the product from the market, purge the improperly obtained 2% of training data, retrain the AI, and/or resubmit it for regulatory approval, incurring delays and expenses.  Thus, from a due diligence perspective, all stakeholders and developers must understand the importance of a robust data plan.  This alignment ensures clear expectations and fosters continuous improvement over time.

A robust data plan also involves implementing a trade secret programme, which consists of a series of practical steps:

  • Identify and define confidential information – including data, algorithms, processes, and know-how – through a thorough inventory of critical assets and precise documentation of their confidential status.
  • Establish access controls that limit sensitive information to a need-to-know basis, combining physical security measures, such as restricted areas and secure data storage, with digital security measures, such as password protection, encryption, and access logs (a minimal illustrative sketch follows this list).
  • Train employees so that everyone understands the importance of protecting trade secrets and their role in maintaining confidentiality.  This includes educating employees on company policies, procedures for handling confidential information, and the potential consequences of breaches, as well as implementing legally binding contractual obligations with employees, contractors, and partners.
  • Document all trade secret policies and procedures to maintain a consistent and enforceable system, and review and update that documentation regularly to reflect changes in the company’s operations and the evolving legal landscape.
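To make the access-control step concrete, the following is a minimal, illustrative Python sketch – an assumption-laden example rather than a production system – pairing need-to-know access checks with an append-only audit trail.  The class and user names (e.g., TradeSecretVault) are hypothetical.

    import datetime

    class TradeSecretVault:
        """Illustrative need-to-know access control with an audit trail.

        A real deployment would sit behind authentication, encryption,
        and tamper-evident logging; this sketch only shows the pattern.
        """

        def __init__(self):
            self._assets = {}      # asset name -> confidential payload
            self._acl = {}         # asset name -> set of authorised user IDs
            self._audit_log = []   # append-only record of every access attempt

        def register_asset(self, name, payload, authorised_users):
            """Inventory step: record the asset and who may see it."""
            self._assets[name] = payload
            self._acl[name] = set(authorised_users)

        def read(self, user, name):
            """Grant access on a need-to-know basis, logging every attempt."""
            allowed = user in self._acl.get(name, set())
            self._audit_log.append({
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "user": user,
                "asset": name,
                "granted": allowed,
            })
            if not allowed:
                raise PermissionError(f"{user} is not authorised for {name!r}")
            return self._assets[name]

    # Usage: only listed users can read; every attempt is recorded.
    vault = TradeSecretVault()
    vault.register_asset("biomarker-weights", {"w1": 0.4}, ["alice"])
    print(vault.read("alice", "biomarker-weights"))   # granted and logged
    try:
        vault.read("bob", "biomarker-weights")        # denied but still logged
    except PermissionError as denial:
        print(denial)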

By implementing a comprehensive trade secret system, healthcare and biotech companies can protect their valuable intellectual property and maintain their competitive advantage in the rapidly evolving landscape of AI-driven innovation.

Lastly, many healthcare innovations stem from research institutes and universities.  Treating each AI-driven research project as a commercial venture from the outset is a valuable approach.  This mindset encourages entrepreneurship-minded researchers to integrate planning and safeguards early, such as executing proper data strategies and securing necessary agreements.  Preparing for commercialisation from the beginning can pave the way to market success.  After all, it is always better to be prepared than to scramble later.

3. Data Privacy and Security

Breaches or non-compliant use of patient data and other sensitive information can severely damage a company’s reputation and that of its stakeholders.  More critically, such incidents have profound consequences for end-users, eroding public trust and potentially leading to significant legal liabilities.

Unfortunately, the healthcare and biotech industry is a prime target for cybercriminals due to the vast amount of valuable personal information it holds.  Recent data breach incidents have heightened concerns among healthcare companies and organisations about collaborating with startups and third-party vendors due to perceived breach risks.

For any AI-centred product, demonstrating robust data security measures is essential to inspiring confidence in potential partners.  Measures such as regular audits, maintaining detailed security records and data trails, and employing state-of-the-art encryption solutions can set a program apart.  These practices demonstrate a commitment to data privacy and security, highlighting the program developer as a responsible partner who prioritises security over growth at all costs.
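As a simple illustration of pairing encryption with a data trail, the sketch below uses the widely available Python cryptography library (its Fernet interface) to encrypt a record at rest and note each retrieval.  It is a minimal sketch under simplifying assumptions – real deployments would hold keys in a dedicated key-management service – and the record contents are invented.

    # pip install cryptography
    import datetime
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice, held in a key-management service
    cipher = Fernet(key)

    access_trail = []             # simplified data trail for audit purposes

    def store_encrypted(record: bytes) -> bytes:
        """Encrypt a sensitive record before it is written to storage."""
        return cipher.encrypt(record)

    def retrieve(token: bytes, requester: str) -> bytes:
        """Decrypt a record and append the access to the data trail."""
        access_trail.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "requester": requester,
        })
        return cipher.decrypt(token)

    token = store_encrypted(b"patient-id:1234; marker:elevated")
    print(retrieve(token, "analyst-7"))
    print(access_trail)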

AI companies and programs must be prepared to answer critical questions about their AI-driven products, such as: Where will the product operate – on-premises or in the cloud?  If in the cloud, is it a hybrid cloud or multi-cloud architecture?  How is data stored?  How will data governance evolve as the company scales?  Providing clear and thoughtful answers to these questions boosts credibility with experienced stakeholders and potential partners and fosters a cultural shift in the industry driven by prioritising security and responsibility.

Awareness of protecting data, preventing breaches, and complying with privacy regulations – such as HIPAA and state laws in the U.S. that exceed HIPAA’s requirements – has grown significantly.  The real test, however, is whether AI companies and AI programs can translate this awareness into a robust, actionable data protection plan and execute it effectively.

For startups specifically, a small company with the same data protection infrastructure and commitment as a prominent organisation gains a tremendous competitive edge.  Startups can also argue that large entities, not small companies, are the primary targets for hackers.  By proposing that the startup handle sensitive data analysis and protect the resulting insights within its own infrastructure, entrepreneurs can underscore how this approach is safer for the data and its derived insights.  The consequences of security breaches – whether for a large entity or a small company – are severe, encompassing financial losses, resource drains, legal liabilities, and possibly product injunctions.  Startups can leverage this reality to present a compelling value proposition: they help large entities mitigate these risks.

Patients’ growing concerns about data rights also drive demand for privacy-first solutions.  AI companies and AI programs that prioritise patient privacy can unlock tremendous opportunities.  One effective way to achieve this is by focusing on data minimisation – collecting only the data necessary for specific purposes.  Unlike data hoarding, which collects excessive and unnecessary data and increases breach risks, data minimisation reduces risk and liability.  AI companies and AI programs that adopt this approach and communicate their focused data needs are more likely to gain the trust of data holders and secure collaborations.
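Data minimisation can also be enforced programmatically at the point of ingestion.  The following hypothetical Python sketch (field names invented for illustration) drops every field not on an explicit allow-list tied to the stated purpose, so excess data is never stored in the first place.

    # Fields the company has a defined purpose (and the rights) to collect.
    ALLOWED_FIELDS = {"patient_id", "age", "biomarker_panel"}

    def minimise(record: dict) -> dict:
        """Keep only allow-listed fields; everything else is discarded
        before storage, reducing both breach exposure and liability."""
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    incoming = {
        "patient_id": "P-001",
        "age": 54,
        "biomarker_panel": [0.7, 1.2],
        "home_address": "10 Example Street",   # unnecessary for the purpose
        "insurance_id": "INS-9876",            # unnecessary for the purpose
    }
    print(minimise(incoming))   # only the three allowed fields survive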

4. AI Explainability and Trust

Trust in AI-powered diagnostic and treatment tools often hinges on the “black box” nature of AI algorithms.  Some degree of opacity must be accepted, because certain algorithms are inherently complex and challenging to interpret.  Even so, AI-driven biotech companies must strive to understand their AI models and make them as explainable as possible.

Explainability is essential for building trust.  The best developers articulate how their AI works in simple, straightforward language.  Investors, in particular, value this transparency.  They are also keen to understand AI’s limitations, as every solution has biases, error rates, and scenarios where human oversight or intervention is necessary – or where AI may not apply.

The willingness of leaders or their technical teams to openly discuss these limitations demonstrates maturity and realism that can be pivotal in gaining an investor’s trust.  Transparency about the strengths and weaknesses of an AI solution positions a company as credible, trustworthy, and prepared to navigate the challenges of AI implementation effectively.

The tech industry often accepts AI’s “black box” nature, which refers to the opacity of its internal workings.  This is because the focus is usually on AI’s functionality rather than a deep understanding of how it works.  This lack of emphasis on transparency aligns with the open-source culture prevalent in the tech sector, which prioritises the sharing and modifying of code, even if the code’s inner workings are not fully understood.

However, the healthcare and biotech industry demands a different level of transparency.  Disclosure requirements for adoption and investment in healthcare are significantly higher than in tech, and the sector carries regulatory approval requirements that tech does not.  Healthcare providers and payers want to understand how and why an AI solution works before committing to it.  Despite this need for clarity, companies should not have to disclose the mathematical formulas behind their AI algorithms to gain adoption from payers: such detail typically exceeds the knowledge base of decision-makers and is usually less critical than understanding the AI’s inputs (data, prompts, variables) and outputs (scoring, decision matrices, insights, etc.).

Companies should focus on explaining how these inputs are used in the AI model and how they generate actionable outputs.  They can avoid disclosing proprietary details, such as parameter weighting (e.g., biomarker weighting) or neural network configurations, as these are not typically critical to decision-making.
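One practical way to explain inputs and outputs without exposing parameter weights is a structured, model-card-style disclosure.  The Python sketch below is purely illustrative – every field value is invented – and shows how a company might publish what goes into and comes out of a model while the trained weights stay private.

    from dataclasses import dataclass

    @dataclass
    class ModelDisclosure:
        """What a payer or provider sees: inputs, outputs, and limitations.
        The trained parameters themselves are deliberately excluded."""
        inputs: list
        outputs: list
        known_limitations: list
        human_oversight: str

    disclosure = ModelDisclosure(
        inputs=["biomarker panel (5 analytes)", "age", "sex"],
        outputs=["risk score (0-100)", "recommended follow-up"],
        known_limitations=["validated on adults only",
                           "lower accuracy near the assay detection limit"],
        human_oversight="clinician reviews every high-risk result",
    )
    print(disclosure)
    # Note: the weighting of each biomarker remains a trade secret and
    # never appears in this disclosure object.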

A company with a unique AI algorithm should protect it as a trade secret rather than disclose it.  This is partly because AI algorithms, grounded in mathematical formulas, are likely not patentable and, even if patented, infringement is often difficult to prove because a competitor’s algorithm cannot readily be reverse engineered from its products.  The biotech market also tends to overvalue the AI “black box”, even though many algorithms are – or soon will be – off-the-shelf solutions.

Over time, the value of the AI “black box” will likely diminish, with the focus shifting to factors like unique data sets, novel combinations of variables under investigation, and the clinical insights derived from such data.  Ultimately, the actual value lies not in the algorithm itself but in the uniqueness and quality of the data and insights.

A company can make a compelling argument that the specific details of its AI model – whether a static algorithm or a neural network – are less relevant than the clarity of what goes into the model, what comes out of it, and the soundness of the methodology.

Patent law for AI-enabled diagnostic inventions supports this approach to disclosure.  For instance, a company investigating a unique combination of five biomarkers using a proprietary data set could obtain patent protection by describing the biomarkers, the distinctiveness of the data, the questions addressed, and the novel insights gained (e.g., a unique biomarker combination).  There is no requirement to disclose the precise mathematical steps (e.g., the weighting of biomarkers) used by the AI model.  This strategy ensures the company protects its proprietary technology while meeting legal and commercial needs for transparency.

5. Regulatory Uncertainty and Innovation

Food and Drug Administration (FDA) clearance or approval holds significant value in the healthcare and biotech industry.  Companies developing AI-enabled products are strongly encouraged to immediately schedule meetings with the FDA.  Collaborating with the FDA throughout product development fosters trust and ensures alignment with regulatory expectations.  The current regulatory framework prioritises data over innovation; the FDA will not grant clearance merely because an AI model is novel.

Encouragingly, the FDA has indicated a growing willingness to engage with these AI-enabled product companies.  This presents an opportunity for these companies to advocate for more adaptive and flexible regulations, including more explicit guidance on regulatory expectations, accelerated approval pathways tailored to address unmet needs, and additional measures that support innovation without compromising safety or efficacy.

It is worth noting that the current regulatory framework faces several challenges, including a shortage of personnel and expertise at regulatory agencies, a limited deep technical understanding of AI technologies, and the influence of industry groups with various and sometimes conflicting incentives.

Analysing improvements to the regulatory framework requires examining what the FDA guidance scrutinises versus what it has not emphasised.  For instance, the FDA places significant focus on change control, requiring AI companies to outline detailed plans for how their AI systems will learn and when updates will be implemented.  While these strict versioning requirements may work for technologies like smartphone operating systems, they are less suited to the dynamic nature of AI platforms.
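To see why rigid version control sits awkwardly with continuously learning systems, consider the kind of change-control record such plans contemplate.  The Python sketch below is a hypothetical illustration – the change categories and fields are assumptions, not regulator-mandated ones – in which each update must fall within a pre-declared change plan before deployment, a discipline that suits discrete releases but strains against models that adapt continuously.

    from dataclasses import dataclass

    # Hypothetical categories a pre-declared change plan might allow.
    ALLOWED_CHANGE_TYPES = {
        "retrain_same_data_sources",   # anticipated by the change plan
        "threshold_recalibration",     # anticipated by the change plan
    }

    @dataclass
    class ModelChangeRecord:
        version: str
        change_type: str
        validation_summary: str
        approved_by: str

    def log_change(record: ModelChangeRecord) -> ModelChangeRecord:
        """Reject updates that fall outside the pre-declared change plan."""
        if record.change_type not in ALLOWED_CHANGE_TYPES:
            raise ValueError(f"{record.change_type} is outside the change plan")
        return record

    log_change(ModelChangeRecord(
        version="2.1.0",
        change_type="retrain_same_data_sources",
        validation_summary="held-out test performance unchanged within tolerance",
        approved_by="quality lead",
    ))
    # A new data source or architecture change would be rejected here and
    # would instead trigger a fresh regulatory review.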

Conversely, the current guidance lacks requirements for regulatory due diligence regarding the source of training data.  This gap poses risks if a company improperly uses (intentionally or not) part of its training data to develop an AI-driven product.  In such cases, the company might struggle to rectify the issue retroactively and be forced to relinquish the improperly used data.  This scenario could render the AI model unusable, leading to withdrawal of the product from the market and significant consequences for customers and stakeholders alike.  The Federal Trade Commission has already employed penalties such as algorithm disgorgement, which can negatively affect AI-enabled products and the companies behind them.

Some may argue that the regulatory agency should adopt a more rigorous “gatekeeping” approach upfront.  By asking the right questions during the approval process, the agency could place more significant pressure on companies at the front end.  Once approval is granted, the agency can trust its processes and the companies that have cleared the approval threshold.  For post-approval updates and changes, a reporting structure could replace the need for repeated upfront reviews.  This front-end-heavy strategy would better align with the business models of AI-driven companies while demonstrating greater trust from the regulatory agency in its processes and in the companies it approves.

To foster innovation while ensuring safety and efficacy, regulatory sandboxes and pilot programs are gaining traction as valuable tools for AI-driven healthcare solutions.  These controlled environments allow companies to test their technologies in a real-world setting with a limited scope, providing helpful feedback and data to inform regulatory decision-making.  By offering a safe space for experimentation and collaboration with regulatory agencies, sandboxes can accelerate the development and validation of AI solutions while mitigating potential risks.  Furthermore, international cooperation is crucial in harmonising regulatory approaches to AI in healthcare.  By sharing knowledge, best practices, and regulatory frameworks, countries can work together to establish globally applicable standards, reducing barriers to innovation and promoting the safe and effective adoption of AI technologies across borders.  This collaborative approach can foster a more consistent and predictable regulatory landscape, enabling companies to navigate the complexities of AI regulation more efficiently and bring their innovations to patients worldwide.

Initiatives like the National Security Commission on Emerging Biotechnology further underscore the increasing importance of biotech to national security.  With Senator Todd Young at the helm, the Commission is poised to play a crucial role in shaping policies that balance innovation with national security concerns.  This highlights the growing need for biotech companies to engage proactively with policymakers and contribute to a regulatory environment that fosters both progress and security.  That engagement matters all the more because a dense and complex regulatory landscape can favour established players.

6. IP Protection for AI Innovations

Data is now a critical asset – no longer a “back office” matter that company leaders can afford to overlook.  Leaders must be resourceful and capital efficient, planning and implementing robust patent and trade secret strategies at every growth stage.

A secure and well-executed data asset strategy significantly enhances a company’s appeal to stakeholders and investors, increasing the likelihood of securing funding or advancing a program within a larger company.  It also boosts the company’s attractiveness as a partner for collaborations.  These advantages, in turn, strengthen the company’s overall position by providing it with more assets to build upon.

The key to standing out lies in having a clear strategy, a detailed plan, and a compelling narrative about how the company has implemented both.  Leaders who can demonstrate that they have secured their data assets from day one – backed by a thoughtful approach and consistent execution – are far more likely to differentiate themselves from the competition.

Datasets themselves are generally protected as trade secrets, not patents.  Patents may protect how data is used to discover insights, including the processes and methodologies employed and the outputs derived from AI analysis – such as biomarkers, quantitative diagnostic tests, drug targets, and other clinical insights.  Other elements, including the underlying datasets, are better safeguarded as trade secrets.

Regarding the “black box” issue discussed above, patent filings should generally avoid disclosing AI algorithms or architecture.  Algorithms themselves are generally not patentable, as mathematical formulas cannot be patented; neural network architectures, however, may be.  In the rare cases where a company must disclose its neural network architecture to gain adoption, patent protection may be considered; where disclosure is unnecessary – which should often be the case – the architecture can remain a trade secret.  Even where patented, detecting infringement may pose challenges, as competitors are unlikely to reveal their proprietary architectures.  The “black box” nature of AI can thus complicate the enforcement of such patents.

Companies should establish a trade secret framework before ingesting datasets or generating output data to implement trade secret protection effectively.  This framework can be relatively simple but still requires a well-thought-out strategy, proper education, and alignment across the organisation.  Companies should incorporate trade secret terms into their data-sharing agreements, addressing critical aspects such as how data is used, audited, stored, and ultimately destroyed or returned.
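Those contractual lifecycle terms – use, audit, storage, and destruction or return – can be mirrored in internal tooling so a company can demonstrate compliance.  The following is a minimal, hypothetical Python sketch (all names invented) of a register that tracks each shared dataset against its agreed terms.

    import datetime

    class SharedDataRegister:
        """Track each ingested dataset against its agreement's lifecycle terms."""

        def __init__(self):
            self._entries = {}

        def register(self, dataset_id, permitted_uses, destroy_by):
            self._entries[dataset_id] = {
                "permitted_uses": set(permitted_uses),
                "destroy_by": destroy_by,   # contractual destruction date
                "destroyed": False,
            }

        def check_use(self, dataset_id, proposed_use):
            """Audit hook: is this use within the agreement's scope?"""
            entry = self._entries[dataset_id]
            return (not entry["destroyed"]
                    and proposed_use in entry["permitted_uses"])

        def overdue_for_destruction(self, today):
            """List datasets past their contractual destruction date."""
            return [d for d, e in self._entries.items()
                    if not e["destroyed"] and today > e["destroy_by"]]

    reg = SharedDataRegister()
    reg.register("hospital-A-cohort", {"research"}, datetime.date(2025, 12, 31))
    print(reg.check_use("hospital-A-cohort", "research"))    # True
    print(reg.check_use("hospital-A-cohort", "commercial"))  # False: outside scope
    print(reg.overdue_for_destruction(datetime.date(2026, 1, 15)))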

In practice, CEOs typically do not negotiate or execute data-sharing agreements.  This responsibility usually falls to corporate development professionals or contract managers, who must thoroughly understand these principles as an extension of the company’s IP strategy.  As a cautionary note, the most significant risks to innovations and trade secrets stem from human errors, often due to a lack of education.  If employees fail to grasp disclosure requirements – what can and cannot be disclosed – any IP program, whether focused on patents or trade secrets, may ultimately fail.

In conclusion, patents are straightforward because they involve deliberately disclosing known innovations.  Trade secrets are also straightforward because they involve deliberate non-disclosure.  The real challenge is ensuring that every organisation member understands and aligns with these principles.  Even the most robust IP strategies can fall apart without this unified understanding.

7. Leadership and Culture

There was a time when an entrepreneur or product leader could succeed simply by being a visionary with innovative research to transform into a product.  However, the landscape has evolved.  Today, successful companies typically merge diverse talents, including data scientists, engineers, and wet lab scientists.  This shift requires leadership to adopt a product-centric mindset.  To do this effectively, they must become fluent in the language of data assets, allowing them to make informed strategic decisions from the beginning.  The best leaders are creating a new playbook for navigating this complex environment.

AI-driven biotech companies are expanding rapidly, with speed, disruption, and scalability at the forefront of leadership’s priorities.  Yet leadership must remember that they operate within the healthcare and biotech industries, not purely in tech.  In this space, prudence cannot be sacrificed for speed.  Increasingly, value lies less in the algorithms themselves and more in their applications and the deep domain expertise that drives them.

The first principle for company leaders is understanding that a data strategy must precede both IP and employment strategies.  A well-defined data strategy lays the foundation for aligning and informing these other critical strategies.

First, when negotiating for data rights, a clear data strategy enables a company to know precisely what it needs the data for and why.  For instance, if a company focuses on liver, pancreatic, or lung cancer, a data strategy ensures the company secures the rights necessary to pursue those fields.  A leader who prioritises data strategy from the start – securing data for specific purposes with the proper rights – fosters the right transactional culture.

Second, once data is collected, a leader educated in data rights can proactively address potential regulatory concerns.  This diligence allows the company to identify and resolve flaws early, avoiding situations where regulatory agencies uncover issues too late for the company to correct its course.

Third, alignment across the organisation is essential.  If a company’s culture emphasises education on data strategy, the execution of that strategy becomes more seamless.  Alignment ensures everyone in the company understands and supports the strategy, minimising conflicts and missteps.

For example, imagine an internal company meeting where the head of Software Engineering, head of R&D, and IP Counsel discuss disclosing the company’s new code.  The head of Software Engineering, with experience in big tech, advocates for open-source sharing, viewing the code as non-innovative and part of a collaborative field.  Meanwhile, the head of R&D and IP Counsel, guided by the company’s data strategy, argue for protecting the code as a crucial component of their data-driven approach, emphasising its potential for generating proprietary insights and competitive advantage.  This scenario highlights the importance of a shared understanding of the data strategy, ensuring that decisions are made in alignment with the company’s overall goals.

In this data-saturated world, leadership transcends traditional scientific expertise.  It demands the creation of a data-fluent organisation where every member understands the transformative power of data and is empowered to wield it responsibly.  Leaders must:

  • Empower Data Citizens: Invest strategically in training and development programmes that equip employees at all levels with the skills to interpret, analyse, and effectively utilise data.  Foster a culture of intellectual curiosity and data-driven decision-making, transforming every employee into a valuable contributor in the data ecosystem.
  • Orchestrate Collaboration and Shatter Silos: Actively break down information silos and cultivate cross-functional teams that seamlessly integrate diverse expertise – scientists, engineers, ethicists, and business leaders – to collaboratively tackle complex challenges.  Encourage open communication and transparent data sharing across departments, creating a unified force for innovation.
  • Lead with Transparency and Forge Trust: Communicate openly and honestly about the organisation’s data practices, AI development processes, and ethical guidelines.  Build unwavering trust with patients, partners, and the public by demonstrating a steadfast commitment to responsible data stewardship and ethical AI development.
  • Embrace Agile Experimentation and Drive Innovation: Foster a dynamic culture of experimentation and iterative learning with data.  Encourage employees to explore unconventional ideas, rigorously test hypotheses, and learn from both successes and inevitable setbacks.  In this environment, failure is not feared but viewed as a valuable stepping stone on the path to groundbreaking discoveries.

The biotech leaders who not only survive but thrive in this AI-powered landscape will be those who build organisations that are not just data-rich, but truly data-driven – where every strategic decision is informed by actionable insights, every process is optimised by intelligent algorithms, and every member of the team is empowered to contribute to a future of healthier lives.  These are the leaders who will define the future of biotech, not just reacting to the data revolution, but actively shaping it.  They will be the architects of a new era of medicine, where data is not just a resource, but the very lifeblood of innovation.

