Is Your IP Practice Leveraging AI Innovation? Should It?

(as published in IPWatchdog)

Breakthroughs in artificial intelligence (AI) and quantum computing are being announced at a rapid pace. At the same time, we’re seeing more and more legal disputes related to these emerging and highly competitive markets. Just this month, xAI sued a former engineer, alleging theft of trade secrets tied to its Grok AI platform. Meanwhile, in quantum computing, Japanese scientists cracked the longstanding ‘W state’ entanglement problem, raising new possibilities for quantum teleportation and making hardware IP more valuable than ever.

Yet, beyond the headlines, the business realities are sobering. According to the 2025 State of AI Cost Management, 80% of enterprises miss their AI infrastructure forecasts by more than 25%, and 84% report significant gross margin erosion tied to AI workloads.

Start-ups and established players alike market solutions claiming to be the most mature, well-tested, secure, and innovative. But in the race to be “first to market,” it’s possible to overlook critical legal matters with long-term implications. A few of the questions that should be asked by every company, in-house attorney, law firm, and even individual include: Should we be using these new technologies? Are we really falling behind if we don’t? And what should we consider when deciding whether to adopt?

1. Don’t fall for hype – AI is still maturing.

If the marketing promises for many of these solutions were true, we would not need any further advancement: they already claim to produce the best results, the best content, and the best safeguards. In reality, every solution is just another tool and should be evaluated like any other tool. The potential to drive real efficiency is there, but the reality often falls short. How can you make sure that your company and customers are getting an actual solution rather than well-packaged marketing? Or, if you are a law firm, that your solution meets professional standards and does not expose your clients to undue risk?

2. Know thyself and thy clients.

Organizations, whether law firms, counsel, or corporations, fall across a broad spectrum of risk tolerance. While law firms and larger organizations tend to be more conservative, start-ups and companies who are themselves producing cutting-edge products tend to be more aggressive in adopting new technologies. Where do you, your customers, or your clients fall on the spectrum?

Trailblazers push aggressively into emerging technologies. They face higher risks but may secure a competitive advantage and often attract clients seeking new solutions.

Measured Adopters balance efficiency gains against solution maturity, vendor reputation, and legal safeguards. They move more slowly than trailblazers, but that caution makes them safer.

Risk-Averse organizations will not adopt technologies that have not been well tested. They minimize immediate risk, though that caution carries its own danger: obsolescence and market irrelevance over time.

Once you understand your company or client’s risk-tolerance profile, the next step is to have the right decision-makers and advisors determine whether to adopt the new solution.

3. Build a multi-disciplinary team.

Asking the right people the right questions is an essential start. Look to your internal teams to identify the stakeholders for adopting new solutions. A multi-disciplinary team with diverse perspectives and expertise is essential to identify risks and opportunities that might otherwise be missed. Consider including a legal perspective, risk management, business development, IT, product teams, and the support teams for all of the above.

Legal teams: Should include both senior and junior perspectives; the strategic value will be clearest to senior personnel, while newer members can advise on adoption and usability. For corporate adoption, include in-house counsel.

IT department: Can advise on security, industry benchmarks (e.g., SOC compliance, security certifications, professional standards), data confidentiality, and architecture (e.g., export control and server location, local versus cloud service).

Users: Do not overlook the people who will actually use the new solution. New features may be brilliant, but they need to be accessible to have value. User feedback can turn a mediocre solution into a great product. Leverage the ability to adapt AI solutions to your specific needs.

Leadership: Leadership must ensure that technology adoption serves the organization’s best interests, protects its intellectual property, and returns value on investment. There are numerous pitfalls to avoid: adopting third-party AI tools risks revealing your trade secrets to solution providers and even making your developments publicly available. Plan for these risks and develop benchmarks to measure the solution’s value.

4. Identify whether the solution is actually a solution.

The best approach in determining whether AI technology is really offering a solution, or is just cleverly packaged marketing, is to treat AI technologies as your company or firm would treat any traditional software adoption:

  • Evaluate the solution against known standards, including security certifications, architecture, and availability, and assess its guardrails for security, data confidentiality, ownership of IP, cost, and reasonable “Terms of Use” that protect your data and rights.
  • Undertake rigorous piloting and testing, ideally on known, publicly available information, and compare the results. Note that local solutions lessen the risk of data loss and exposure, but cross-contamination within your own company remains a risk: a weak internal security policy can still expose the company’s confidential information.
  • Compare solutions and their outputs directly, and track the investment (time) required to produce those outputs.

Note that rigorous, defined testing plans are highly recommended but rarely used; even a basic framework of testing criteria will improve results.
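As an illustration of what a defined testing framework might look like, the sketch below scores pilot results against weighted criteria and tracks the time invested to produce comparable output. The criteria, weights, vendor names, and ratings are all hypothetical assumptions for illustration, not a standard or a recommendation.

```python
from dataclasses import dataclass

# Hypothetical pilot scorecard: criteria, weights, and ratings are illustrative only.

@dataclass
class PilotResult:
    solution: str
    scores: dict           # criterion -> 0-5 rating from the pilot team
    minutes_to_output: float  # time invested to produce comparable output

WEIGHTS = {
    "accuracy_vs_known_data": 0.35,      # tested on public, non-confidential material
    "security_and_confidentiality": 0.30,
    "terms_of_use_protections": 0.20,
    "usability": 0.15,
}

def weighted_score(result: PilotResult) -> float:
    """Combine per-criterion ratings into one comparable number."""
    return sum(WEIGHTS[c] * result.scores.get(c, 0) for c in WEIGHTS)

pilots = [
    PilotResult("Vendor A", {"accuracy_vs_known_data": 4, "security_and_confidentiality": 3,
                             "terms_of_use_protections": 4, "usability": 5}, 12.0),
    PilotResult("Vendor B", {"accuracy_vs_known_data": 5, "security_and_confidentiality": 4,
                             "terms_of_use_protections": 2, "usability": 3}, 20.0),
]

# Rank candidates by weighted score, highest first.
for p in sorted(pilots, key=weighted_score, reverse=True):
    print(f"{p.solution}: score={weighted_score(p):.2f}, minutes={p.minutes_to_output}")
```

Even a simple rubric like this forces the team to agree in advance on what “better” means, which is most of the value of a defined testing plan.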

5. Determine essential considerations for your company and/or clients.

Carefully examine the solution’s terms of use, focusing on data use and privacy:

  • Make sure that input data is not shared with or accessible to others, even other users at the same company; for example, check whether the solution encrypts data on the local device.
  • Ensure that input and output data are not used to train the provider’s models.
  • If applicable, note the legal requirements in the United States and in jurisdictions outside of it: state requirements may vary, and in the European Union (EU) broad disclosure requirements may apply; for example, consider the European Patent (EP) requirement to identify and/or publish which data models and AI are being used.
  • Take notice of the solution architecture: Is it cloud-based? If so, it may be important to restrict processing to your local jurisdiction (for example, export control laws vary by jurisdiction).
  • Ensure that IP is protected and that you maintain ownership: all interactions, inputs, outputs, explicit prompts, personas derived from inputs, etc. should be owned by and accrue to the user.

6. Weigh the risks and rewards.

When assessing whether to adopt an AI solution, weigh the costs, risks, and uncertainties of the new technology, including its risk of becoming obsolete, against the benefit to your company or client of improved performance, reduced friction, and optimized workflows.

In weighing the risks and rewards, it’s important to ask:

  • Do you need to change how you train your teams?
  • Are the solutions providing improvements, efficiency, and any measurable contribution at all?
  • What are other players in your space doing?
  • Which solutions are showing the most promise?
  • What spaces include the most risk?
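One way to make this weighing concrete is a simple expected-value comparison of adopting versus standing pat. The sketch below is a minimal illustration; every figure (benefit, incident probability and cost, licensing cost, obsolescence reserve) is a hypothetical assumption your team would replace with its own estimates.

```python
# Illustrative risk/reward comparison. All figures are hypothetical assumptions.
# Expected annual value = efficiency benefit
#                         - (probability of an incident * cost of an incident)
#                         - licensing cost - reserve against obsolescence

def expected_value(benefit: float, p_incident: float, incident_cost: float,
                   license_cost: float, obsolescence_reserve: float) -> float:
    """Net expected annual value of an adoption decision, in dollars."""
    return benefit - p_incident * incident_cost - license_cost - obsolescence_reserve

# Hypothetical numbers: $250k efficiency gain, 5% chance of a $1M incident,
# $60k/year licensing, $20k set aside against the tool becoming obsolete.
adopt = expected_value(benefit=250_000, p_incident=0.05, incident_cost=1_000_000,
                       license_cost=60_000, obsolescence_reserve=20_000)

# Status quo: no new benefit, no new risk or cost.
status_quo = expected_value(benefit=0, p_incident=0.0, incident_cost=0,
                            license_cost=0, obsolescence_reserve=0)

print(f"Adopt: {adopt:,.0f}  vs  Status quo: {status_quo:,.0f}")
```

The arithmetic is trivial; the discipline of writing down each estimate, and defending it to the multi-disciplinary team, is where the exercise earns its keep.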

Generative AI

The promises of generative AI include productivity gains, improved workflows, and creative outputs. Keep in mind that use of open-source models may breach license restrictions, creating hidden liability. The risks can be serious, including litigation exposure, as the recent Anthropic/Claude lawsuit highlights. [1] Although the court found the underlying uses transformative, the law in this space may see significant changes and potential conflicts between jurisdictions.

Safer Applications of AI

Not every implementation is high-risk. Administrative, operational, and maintenance tasks often involve repetitive processes that do not involve sensitive data. Here, AI adoption can deliver immediate efficiency gains with lower exposure, provided that any AI solutions are monitored and trained on “safe” data.

It’s All About Balance

Adopting new technology and the plethora of AI solutions isn’t about chasing hype—it’s about aligning with a long-term growth strategy, risk tolerance, and client needs.

Trailblazers should expect turbulence but may reap early rewards. Measured adopters can achieve steady gains while managing exposure. The risk-averse can avoid pitfalls but must guard against stagnation. Every organization must decide where it falls on this spectrum, and apply rigorous adoption procedures regardless. Balancing speed with wise adoption, protecting IP, data, and reputation, will allow an implementation tuned to your risk tolerance.