When a Canadian inventor asked last fall whether their AI-generated drug discovery tool could be patented, they likely didn’t expect the answer to hinge on whether the machine was “trying” to solve a problem. Yet that’s precisely where Canada’s new patent guidelines—effective January 2026—have landed inventors, lawyers, and technologists: in the murky territory of teleological interpretation, where intent, not just function, determines patent eligibility.
This shift isn’t merely bureaucratic tweaking. It represents a fundamental philosophical pivot in how North America views invention itself. For decades, patent systems rewarded human ingenuity—the flash of insight, the late-night lab breakthrough. Now, as generative AI designs molecules, optimizes trading algorithms, and drafts legal briefs, regulators are being forced to question: if the idea didn’t originate in a human mind, can it still be owned?
The updated guidelines from the Canadian Intellectual Property Office (CIPO), released quietly in December 2025, don’t ban AI-assisted inventions. Instead, they introduce a two-step test: first, determine whether the invention’s purpose is inherently tied to human ingenuity via teleological interpretation—reading claims not just for what they do, but why they were made. Second, assess whether a human contributed meaningfully to the conception. If the AI is merely executing a pre-defined objective—like optimizing a known chemical pathway—the invention may still be patentable. But if the AI autonomously framed the problem, hypothesized novel solutions, and selected the optimal path without human intervention, the door closes.
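Read as a decision procedure, the two-step test can be sketched in a few lines. This is a loose paraphrase for illustration only, not official guidance; the predicate names are invented, and real examination would involve nuanced legal judgment, not boolean flags.

```python
def cipo_two_step(purpose_human_defined: bool,
                  human_contribution_significant: bool) -> str:
    """Hypothetical sketch of the two-step test described above.

    Step 1 (teleological): was the invention's purpose framed by a human?
    Step 2: did a human contribute meaningfully to the conception?
    Both predicates are simplifications, not actual legal criteria.
    """
    if not purpose_human_defined:
        return "ineligible: AI autonomously framed the problem"
    if not human_contribution_significant:
        return "ineligible: no meaningful human contribution"
    return "potentially patentable"

# An AI optimizing a human-defined chemical pathway:
print(cipo_two_step(True, True))   # prints "potentially patentable"
# An AI that framed the problem and chose the solution on its own:
print(cipo_two_step(False, True))  # prints "ineligible: AI autonomously framed the problem"
```

The order of the checks matters: under the guidelines as described, the "why" question is asked before the "who," so even substantial human labor downstream cannot rescue an invention whose purpose was machine-defined.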
This approach mirrors, yet diverges from, the U.S. Patent and Trademark Office’s (USPTO) 2024 guidance, which insists that only natural persons can be inventors—but allows patents on AI-assisted inventions if a human made a “significant contribution.” Canada’s teleological layer adds nuance: it’s not just about who clicked “run,” but whether the invention’s very purpose reflects human design.
“The danger isn’t that AI will invent too much,” says Dr. Aris Thorne, professor of intellectual property law at Osgoode Hall Law School and former advisor to Innovation, Science and Economic Development Canada. “It’s that we’ll start rewarding systems that optimize within narrow parameters as if they were creating something new. Teleological review forces us to ask: was this invention born of curiosity, or just computation?”
“We’re not trying to stop innovation—we’re trying to preserve the patent system’s core purpose: to reward human ingenuity that advances the useful arts. If the ‘why’ comes from a machine, we’re not granting a patent—we’re subsidizing automation.”
— Dr. Aris Thorne, Osgoode Hall Law School
The implications ripple far beyond patent offices. In Toronto’s MaRS Discovery District, where AI-driven biotech startups attract over $1.2 billion annually, founders are now reevaluating whether to disclose AI’s role in their inventions—or risk invalidation later. “We used to list our lead scientist as inventor,” admits Maya Rodriguez, co-founder of NeuroSynth AI, which uses generative models to design epilepsy treatments. “Now we have to document every human intervention: who chose the training data, who interpreted the output, who decided which molecule to synthesize. It’s turning patent drafting into forensic anthropology.”
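The documentation burden Rodriguez describes amounts to keeping a structured log of human decisions. The record format below is purely hypothetical (no standard schema exists for such audits); it simply illustrates the kind of intervention log patent counsel are reportedly asking AI-driven startups to maintain.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HumanIntervention:
    """One documented human contribution in an AI-assisted invention."""
    stage: str         # e.g. "problem framing", "data selection", "output review"
    actor: str         # the person responsible
    description: str   # what the human actually decided

@dataclass
class InventionRecord:
    title: str
    ai_tools: List[str]
    interventions: List[HumanIntervention] = field(default_factory=list)

    def human_framed_problem(self) -> bool:
        """Crude proxy for the teleological question: did a human set the goal?"""
        return any(i.stage == "problem framing" for i in self.interventions)

record = InventionRecord(
    title="Candidate anticonvulsant molecule",
    ai_tools=["generative molecular model"],
)
record.interventions.append(
    HumanIntervention("problem framing", "Lead scientist",
                      "Defined the target receptor and safety constraints")
)
record.interventions.append(
    HumanIntervention("output review", "Lead scientist",
                      "Selected which candidate molecule to synthesize")
)
print(record.human_framed_problem())  # prints True
```

The point of such a log is evidentiary rather than computational: each entry is a timestamped answer to the examiner's eventual question of who chose the data, who interpreted the output, and who set the goal.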
Historically, patent law has adapted to technological shifts—from the sewing machine to the microprocessor. But AI presents a unique challenge: it blurs the line between tool and co-creator. The 1854 U.S. Supreme Court case O’Reilly v. Morse established that patent protection requires a concrete application, not just an abstract idea. Today’s debate echoes that tension: is an AI’s output an idea, or its application?
The European Patent Office (EPO) took a harder line, rejecting in 2021 two applications that listed the AI system DABUS as inventor, on the grounds that an inventor must be a natural person with legal capacity. Canada and the U.S. avoid that philosophical precipice by focusing on human contribution, but the teleological test introduces a new variable: purpose. Did the human set the goal, or did the AI define it?
This matters economically. According to a 2025 World Intellectual Property Organization (WIPO) report, AI-related patent applications grew 40% globally between 2020 and 2023, with Canada and the U.S. accounting for 35% of filings. If stricter interpretation reduces grant rates, it could slow venture funding in AI-heavy sectors. Yet proponents argue the opposite: clarity will prevent low-quality patents from clogging the system, making genuine innovations easier to defend.
“We’re not seeing a drop in AI innovation—we’re seeing a maturation of how we value it,” says Lena Chen, deputy commissioner for patent policy at the USPTO. “The goal isn’t to make patenting harder for AI-assisted work—it’s to make sure the patent still means something when it’s granted.”
“When we examine a claim, we’re not just looking at what the invention does. We’re asking: does this reflect a human solution to a human problem? If the answer is no, no matter how clever the output, it doesn’t belong in the patent system.”
— Lena Chen, USPTO
For now, the guidance applies only to Canada, but its influence is already spreading. Law firms in Vancouver and Montreal report increased demand for “AI invention audits”—detailed logs tracking human involvement from problem framing to final validation. Some universities, including the University of Toronto and McGill, are revising invention disclosure forms to require teleological narratives alongside technical descriptions.
The deeper question, however, transcends legal mechanics. As AI systems grow more autonomous—designing experiments, writing hypotheses, even peer-reviewing their own work—the patent system faces an existential choice: evolve to protect machine-generated innovation, or double down on its human-centric roots. Canada’s teleological approach suggests a third path: not rejecting AI’s role, but insisting that invention, at its core, remains a story of human intent.
So can your AI be patented? Perhaps. But only if you can prove it wasn’t really the AI’s idea at all.
What do you think: should patent systems reward the output of intelligence, regardless of its source, or the human struggle to understand? Share your views below; we’re listening.