artificial intelligence Archives - IPOsgoode
/osgoode/iposgoode/tag/artificial-intelligence/
An Authoritative Leader in IP

Dr. Tesh Dagne Shines a Light on the Unseen Hands and Invisible (Copy)Rights Behind AI Systems
/osgoode/iposgoode/2024/10/04/dr-tesh-dagne-shines-a-light-on-the-unseen-hands-and-invisible-copyrights-behind-ai-systems/ (Fri, 04 Oct 2024)

By ‘Damola Adediji
Teshager Dagne, Ontario Research Chair
and IP Osgoode Affiliated Researcher

Artificial intelligence systems often “give the vibe” of complete automated processing without human involvement. However, as Dr. Dagne reminds us, a closer “vibe check” reveals layers of unseen and under-appreciated human inputs, efforts, and labour. The efforts of those unseen human hands are, in fact, the engine of AI innovation.

Dr. Dagne is the Ontario Research Chair in Governing Artificial Intelligence and an Associate Professor at York University’s new Markham campus in the School of Public Policy & Administration. He also teaches Property Law at Osgoode Hall Law School, where he is an Affiliated Researcher with IP Osgoode. His current project, which he recently presented at the University of Cape Town, highlights how copyright enables the proactive exploitation of digital workers’ contributions as inputs to AI training or, in some cases, AI-assisted outputs.

By bringing to the fore the roles of digital workers, Dagne hopes to unearth the collaborative creation that goes into the AI production chain and feeds into the AI output. His paper, “Unseen Hands, Invisible Rights: Unmasking Digital Workers in the Shadows of AI Innovation and Implications for the Future of Copyright Law”, is soon to be published in a forthcoming volume on IP’s Futures: Exploring the Global Landscape of Intellectual Property Law and Policy (Ottawa UP, 2025), which Dagne is co-editing. His chapter probes the future of copyright law, attempting to turn the focus of copyright to collaborative authorship. This move, Dagne argues, could respond to demands for the fair allocation of rights between digital workers, as authors or joint authors in some cases, and AI designers as exploiters of digital works.

Digital Workers are the Lifeblood of AI Development

As one commentator puts it, “[AI] doesn’t run on magic pixie dust… [AI training] is a job that actually takes quite a bit of creativity, insight, and judgment.” Such ingenuity involves the preparation of data for the datasets used to train and build AI technologies, which consists of a number of decisions as to the kind of data to collect, curate, clean, label, abstract, index, and so on. The process of dataset development starts with formulating the problem: the conceptualization of the machine learning task by turning problems “into questions that data science can answer”. Task conceptualization is typically the responsibility of the AI designer, which may be an AI company like OpenAI or Anthropic, for example, or a platform company like Microsoft, Meta, or Amazon. After the conceptualization process comes the data collection, refining, and measuring stage. Dagne’s focus is on the “digital workers” who enter the picture at this stage in the AI production process.

These digital workers contribute to the training process of AI systems in three steps: generating and annotating data (AI preparation), verifying model output (AI verification), and directly mimicking model behaviour to produce a service (AI impersonation). They range “from higher-skilled, ‘macro-task’ […] workers [who] offer their services as graphic designers, computer programmers, statisticians, translators, and other professional services, to [those engaged in] ‘micro-task’ [work] which typically involve clerical tasks that can be completed quickly and require less specialized skills.” As one description puts it, “complex projects are broken down into smaller, easily accomplished tasks, which can then be distributed to a large number of workers.” Micro-task activities mainly involve the AI preparation aspect of AI training processes but can also include the AI verification and AI impersonation steps.

The Copyright Question

Much of the debate around copyright and AI has focused on whether using the underlying works of which inputs are constituted (the images, texts, musical works, and other subject matter) for unauthorized learning constitutes copyright infringement. Dagne’s focus, however, is on the copyright that can subsist over collected data, as some cases have recognized, and on whether digital workers’ activities in preparing training datasets in the AI pipeline could themselves give rise to a copyright interest. This question can be answered by examining the nature of digital workers’ contributions to the tasks assigned to them and the ownership of copyright under the contractual agreements that digital workers sign with platforms.

Digital workers in the AI production value chain collect raw data and help add extra meaning by associating each piece of data with relevant attributive tags. Although some have argued that this attributive task is a mundane exercise that could ultimately be automated, others have contended that tasks such as attribution will always be assigned to humans because of their capacity to recognize and classify data. Indeed, human intervention is now in demand to recognize the nuances and sophisticated details of specific data. One example of such demand is in the medical field, where an understanding of scientific vocabulary is required.

From a doctrinal perspective, the copyright question is whether the contribution of digital workers described above meets the threshold of originality—which, as defined in Canadian law by the Supreme Court of Canada, requires more than trivial skill and judgment in the selection or arrangement of data. If so, we might ask whether recognizing the copyright status of such contributions could address these workers’ invisibility. Even if the tasks executed by digital workers are original enough to amount to authorship, such authorship does not automatically translate into ownership. The ownership of the creative tasks conducted by digital workers as part of the collaborative venture is determined either by the workers’ status as employees or otherwise by contract—which means that it is determined in the context of significant power asymmetries and the routine exploitation of digital workers.

If copyright entrenches the inequities of an asymmetrical situation—by ensuring that the collective effort of digital workers in compiling essential datasets for AI training and AI development remains unseen and undervalued—Dagne thinks the time has come to confront its complicity. He suggests that, spurred by the arrival of AI, the copyright system needs to restructure the relationship between authors-as-(data)workers and corporate proprietors in pursuit of greater fairness.

‘Damola Adediji is a Visiting Researcher with IP Osgoode and Doctoral Candidate with the Centre for Law, Technology & Society at the University of Ottawa.


The US Copyright Office Clarifies that Copyright Protection Does Not Extend to (Exclusively) AI-Generated Work
/osgoode/iposgoode/2023/03/29/the-us-copyright-office-clarifies-that-copyright-protection-does-not-extend-to-exclusively-ai-generated-work/ (Wed, 29 Mar 2023)

Katie Graham is an IPilogue Writer and a 2L JD Candidate at Osgoode Hall Law School

In March 2022, the Canadian Intellectual Property Office (“CIPO”) allowed its first artificial intelligence (AI)-authored copyright registration of a painting co-created by the AI tool, RAGHAV Painting App (“RAGHAV”), and the IP lawyer who created RAGHAV, Ankit Sahni. RAGHAV is the first non-human “author” of a copyrighted work. However, Canadian courts have held that “[c]learly a human author is required to create an original work for copyright purposes” (para 88). Though the AI tool is a co-author with a human, the registration suggests that both RAGHAV and Ankit Sahni can constitute an author under the copyright regime, and it has prompted debate amongst Canadian artists. Though the landscape in Canada is still unclear, the US Copyright Office (“Office”) issued a clarification on March 16, 2023, about its practices for examining and registering works that contain material generated by artificial intelligence (AI) technology.

The Human Authorship Requirement

The Office clarified that the term “author,” used in both the US Constitution and the Copyright Act, excludes non-humans. To qualify as a work of “authorship,” a work must be created by a human being; works produced by a machine or mere mechanical process that operates randomly or automatically, without any creative input or intervention from a human author, are not registrable. This threshold reflects the Canadian copyright regime, under which the author must contribute original expression to the work through an exercise of skill and judgment that is more than a purely mechanical exercise.

The US Copyright Office’s Approach to AI-Generated Work

The Office provided important guidance on assessing the protectable elements of AI-generated works. It begins by distinguishing whether the ‘work’ is one of human authorship, with the AI tool merely being an assisting instrument, or whether the protectable elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were conceived and executed not by man but by a machine.

If the machine produced the expressive elements of the work, it is not copyrightable. This guidance is critical for the debates surrounding ChatGPT, where the AI tool receives a prompt from the user, and the user does not exercise ultimate creative control of the output. The Office provided an example where a user instructs an AI tool to “write a poem about copyright law in the style of William Shakespeare”. Given that the user contributes little to no expressive elements to the AI-generated output, the output is not a product of human authorship and is not protected under the US Copyright Act.

However, the Office also noted that, in some cases, AI-generated works might contain sufficient human-authored elements to warrant copyright protection. This may apply where the human selects or arranges the AI-generated elements, or modifies the AI-generated material, to a degree that constitutes original expression. The analysis seeks to determine whether a human had ultimate creative control over the expression and formed the traditional elements of authorship.

This guidance responds to the Office’s recent review of a work titled “Zarya of the Dawn”, which combined human-authored elements with AI-generated images. While the Office found that the author, Kristina Kashtanova, owned the work’s text and the selection, coordination, and arrangement of the work’s written and visual elements, copyright protection did not extend to the images generated by the AI tool, Midjourney. Though Kashtanova edited the Midjourney images, the Office held that the creativity supplied did not constitute authorship.

How will this apply in Canada?

Given the registration of RAGHAV as an author under Canadian copyright law last year, it remains to be seen whether CIPO will follow an assessment similar to the US Office’s and revisit the decision to register an AI-generated work as a work of joint authorship. Some also question whether moral rights, which are not part of the US regime, will extend to AI authors, and whether AI authorship will alter the copyright term, which runs until 70 years after the last living author’s death. The increasing traction of AI warrants similar guidance from CIPO regarding the status of AI authorship under Canadian copyright law.


Engineers Launch Free Access to AI Ethics and Governance Standards
/osgoode/iposgoode/2023/03/15/engineers-launch-free-access-to-ai-ethics-and-governance-standards/ (Wed, 15 Mar 2023)


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


The Institute of Electrical and Electronics Engineers (IEEE), a professional organization for engineers and technology experts, recently announced the launch of the IEEE GET Program, aimed at providing free access to AI ethics and governance standards. The program is part of IEEE's ongoing efforts to promote responsible AI practices and help organizations develop and implement ethical AI systems.

The program opened seven standards for public access:

  1. Age-Appropriate Digital Services
  2. Addressing Ethical Concerns during System Design
  3. Transparency of Autonomous Systems
  4. Data Privacy
  5. Transparent Employer Data Governance
  6. Ethically Driven Robotics and Automation Systems
  7. Assessing the Impact of Autonomous and Intelligent Systems (“A/IS”) on Human Well-Being

Assessing the Impact of A/IS

The most cited of the standards is the one for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being, which addresses the growing concern of how autonomous and intelligent systems may affect society. This standard provides a structured approach to evaluating the impact of A/IS on individuals, communities, and society, and helps organizations ensure that their systems are developed and deployed in a manner that supports human well-being. Its recommended practices aim to raise awareness of well-being concepts and indicators for A/IS and to increase the capacity to monitor, evaluate, and address the well-being impacts of A/IS. Successful application of the standard includes implementing the ability to evaluate the ongoing well-being impact of A/IS on users and stakeholders while continuing to improve the system to safeguard human well-being, resulting in a greater ability to avoid unintentional harm.

The Standard also suggests numerous domains of well-being, with accompanying indicators, that system designers should be concerned with: individual domains (satisfaction with life, affect/feelings, and psychological well-being), social domains (community, culture, education, economy, health, and work), and regulatory domains (environment, government, and human settlements). The Standard notes that these suggestions are a starting point for selecting indicators and that “indicators should be adapted to fit the circumstances of measuring and gathering data about the well-being impacts for an A/IS on user(s).”
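To make the domain/indicator idea concrete, here is a minimal sketch of how a design team might organize the Standard’s suggested well-being domains and indicators for ongoing monitoring. The data structure and the `record` helper are illustrative assumptions, not the Standard’s own schema.

```python
from dataclasses import dataclass, field


@dataclass
class Domain:
    """One well-being domain with its indicators mapped to latest scores."""
    name: str
    indicators: dict = field(default_factory=dict)


# Domains and indicators taken from the Standard's suggested starting point;
# scores start unset until the team measures them.
wellbeing = [
    Domain("individual", {"satisfaction_with_life": None,
                          "affect": None,
                          "psychological_wellbeing": None}),
    Domain("social", {"community": None, "culture": None, "education": None,
                      "economy": None, "health": None, "work": None}),
    Domain("regulatory", {"environment": None, "government": None,
                          "human_settlements": None}),
]


def record(domains, domain_name, indicator, score):
    """Log a measured indicator score; reject unknown domains/indicators."""
    for d in domains:
        if d.name == domain_name and indicator in d.indicators:
            d.indicators[indicator] = score
            return True
    return False  # unknown entry: adapt the indicator list per the Standard


print(record(wellbeing, "individual", "affect", 0.7))
```

Keeping unknown indicators out of the log mirrors the Standard’s advice that indicators be deliberately selected and adapted, rather than accumulated ad hoc.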

Ethics and Systems Design

The standard on Addressing Ethical Concerns during System Design is also frequently cited. This Standard provides a set of guidelines and best practices for organizations that engage in system and software engineering to make value-based ethical system design and investment decisions.

A number of interesting points are included within the Standard’s model process. The standard guides organizations on establishing key roles in Ethical Value Engineering project teams. These teams are then tasked with defining how a system is expected to operate from the users’ perspective (Concept of Operations) and with identifying stakeholders and determining the context of use and the potential for ethical benefit or harm (Context Exploration). There is also an Ethical Values Elicitation and Prioritization Process, which aims to obtain and rank values and value demonstrators, followed by an Ethical Requirements Definition Process that guides the defining of value-based system requirements to reflect the prioritized core values and their value demonstrators. Finally, the Standard also sets out an Ethical Risk-Based Design Process and a Transparency Management Process, guiding the realization of ethical values and required functionality in designing a system and how to inform stakeholders of the system’s implementation of ethics.

Impact on AI

There are signs that these IEEE standards are already being incorporated into AI governance. For instance, the European Union’s Artificial Intelligence Act (“EU AI Act”) references many of the components that the IEEE makes available in this package. This will likely continue to be relevant both for regulators and AI developers: “TÜV SÜD sees a strategic advantage for those looking to demonstrate eventual compliance to human-centric regulatory measures or market pressures to leverage these IEEE standards and certifications.” Developing ethical AI systems is a multifaceted problem which requires extensive deliberation by organizations involved with AI systems development. The release of free standards by an authoritative governing body will likely immensely benefit everyone involved.


Who is responsible for discriminatory AI systems in healthcare?
/osgoode/iposgoode/2023/03/09/who-is-responsible-for-discriminatory-ai-systems-in-healthcare/ (Thu, 09 Mar 2023)


Mac Mok is a 3L JD candidate at Osgoode Hall Law School. This article was written as a requirement for Prof. Pina D’Agostino’s IP Intensive Program.


Artificial intelligence (AI) systems have entered every facet of our daily lives, leveraging the availability of massive data sets. The healthcare field has been no exception, with the Humber River Hospital now housing an AI system that can track the flow of patients from intake to discharge and help healthcare providers make more informed decisions to improve overall efficiency and deliver better care.

AI systems, however, can produce discriminatory consequences. Examples include the Amazon recruiting algorithm that penalized resumes containing the word “women” and the COMPAS algorithm used in criminal sentencing, which was more likely to penalize African-American defendants. During AI system development, biases can appear in the training data when historical human biases contribute to the generation of such data, or when the data is imbalanced, that is, when some groups are overrepresented and others are underrepresented.

Such biases can also impact healthcare AI systems, and recent regulatory efforts have begun addressing this issue. Health Canada, in a joint effort with the US Food and Drug Administration and the UK’s Medicines and Healthcare Products Regulatory Agency, has identified guiding principles to inform the development of Good Machine Learning Practice. In particular, the third guiding principle, which requires that clinical study participants and data sets be representative of the intended patient population, underscores the importance of managing bias. Another regulatory effort is the Artificial Intelligence and Data Act (AIDA), tabled as part of Bill C-27 to regulate the use of AI systems in Canada. The AIDA puts the onus on the person(s) responsible for the AI system to “establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system”. Healthcare AI systems could also fall under such regulation.

The guidelines and regulations above place much of the responsibility for mitigating bias on the shoulders of AI developers. Logically, developers would be in the best position to recognize and address biases in the system, as they have direct access to the training data and may correct important deficiencies. Arguably, however, other stakeholders, such as the doctors seeking AI to solve a problem and the end users of the system such as patients, should also provide critical feedback to AI developers. Particularly in the healthcare field, where understanding training data and interpreting system outputs may require years of medical training, medical practitioners may play a key part in spotting biased inputs and outputs, allowing for correction of AI system deficiencies. Thus, the guidelines and regulations on preventing biased AI systems should consider what role doctors and patients have in developing responsible healthcare AI tools.


Synthetic Data: The Next Solution for Data Privacy?
/osgoode/iposgoode/2023/02/23/synthetic-data-the-next-solution-for-data-privacy/ (Thu, 23 Feb 2023)


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


One contentious point from the Bracing for Impact session was synthetic data’s potential to solve the privacy concerns surrounding the datasets needed to train AI algorithms. In light of its increasing popularity, I will explore the benefits and dangers of this potential solution.

Concept

The data privacy concern that synthetic data aims to address is very similar to the purpose of traditional anonymization: protecting anonymized data from being de-identified without reducing data utility. This is distinct from data augmentation, which is the process of adding new data to an existing real-world dataset in order to provide more training data, and could include rotating images or combining two images to create a new one. Data augmentation is typically not useful in the privacy context.

In a blog post, the Office of the Privacy Commissioner of Canada (“OPC”) describes synthetic data as “fake data produced by an algorithm whose goal is to retain the same statistical properties as some real data, but with no one-to-one mapping between records in the synthetic data and the real data.” Synthetic data is produced by fitting a generative statistical model to real-world source data and then evaluating the model’s output for statistical similarity to the source alongside privacy metrics. Critically, there is no need to remove quasi-identifying data, that is, data vulnerable to de-anonymization. This results in more complete datasets.
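The OPC’s description can be illustrated with a minimal sketch. Here the “generative statistical model” is assumed to be a simple multivariate Gaussian fitted to the source data; real synthetic-data systems use far richer models (copulas, GANs, and so on), but the Gaussian keeps the core idea visible: new records are drawn from the fitted model, with no one-to-one mapping back to real rows. The example data is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for sensitive source data: 1,000 records with two
# correlated numeric attributes (say, age and income).
real = rng.multivariate_normal(mean=[40.0, 60_000.0],
                               cov=[[100.0, 30_000.0],
                                    [30_000.0, 1e8]],
                               size=1_000)

# "Fit" the generative model: here, just the sample mean and covariance.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# Sample brand-new records from the fitted model. No synthetic row
# corresponds one-to-one with any real row.
synthetic = rng.multivariate_normal(mean=mu, cov=sigma, size=1_000)

# Utility check: the synthetic data should retain the source's
# statistical properties (means and correlations stay close).
corr_real = np.corrcoef(real, rowvar=False)[0, 1]
corr_syn = np.corrcoef(synthetic, rowvar=False)[0, 1]
print(np.allclose(synthetic.mean(axis=0), mu, rtol=0.05))
print(abs(corr_real - corr_syn) < 0.1)
```

The statistical-similarity checks at the end stand in for the “privacy metrics alongside utility” evaluation the OPC describes; a real pipeline would also test re-identification risk before release.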

Benefits

Synthetic data uses a highly automated process to provide protection from de-identification. This results in datasets that can be readily shared between AI developers without the dangers of privacy concerns. There are also substantial cost savings: one synthetic data service company founder estimated that “a single image that could cost $6 from a labeling service can be artificially generated for six cents.” Synthetic data can also be manufactured to reduce bias by deliberately including a wide variety of rare but crucial edge-cases. Nvidia uses machine vision for autonomous vehicles as its example, but I think this concept should translate to improving representation of marginalized and under-represented groups in large datasets in healthcare or facial recognition. Many of the Bracing for Impact panelists shared this concern.

Dangers

The OPC notes in its blog many issues and concerns, particularly regarding re-identification. The risk is especially acute if the synthetic data is not generated with sufficient care and the “generative model learns the statistical properties of the source data too closely or too exactly”. In other words, if the model “overfits” the data, the synthetic data will simply replicate the source data, making re-identification easy. There is also a concern with membership inference, where the very fact that an individual’s data exists in the dataset is an inherent risk. A recent study also demonstrated that “synthetic data does not provide a better tradeoff between privacy and utility than traditional anonymization techniques” and that “the privacy-utility tradeoff of synthetic data publishing is hard to predict.” This indicates that the characterization of synthetic data as a “silver bullet” is likely overselling its capabilities.
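The overfitting failure mode can be shown with a deliberately pathological toy: a “generative model” that has simply memorized the source records. This is an invented worst case, not the OPC’s own example, but it makes the point that a model which learns the source too exactly releases the source itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy source data: each row is a record of quasi-identifying attributes.
real = rng.integers(0, 100, size=(500, 3))


def overfit_sample(n):
    """A pathologically overfit 'generative model': it has memorized the
    source data and just resamples its rows verbatim."""
    idx = rng.integers(0, len(real), size=n)
    return real[idx]


synthetic = overfit_sample(500)

# Privacy check: what fraction of "synthetic" rows exactly match a real
# record? With a memorizing model, every row does, so publishing the
# synthetic dataset is equivalent to publishing the source.
matches = sum(any((row == real).all(axis=1)) for row in synthetic)
print(matches / len(synthetic))  # 1.0: re-identification is trivial
```

A well-behaved generator would drive this exact-match fraction toward zero while keeping aggregate statistics close, which is precisely the tradeoff the study quoted above found hard to predict.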

Implementations

Nvidia is using synthetic data in computer vision, where its primary purpose is not privacy, a reminder that the technology serves other important functions as well. Synthetic data platforms are also emerging in healthcare, and adoption is only beginning.

Conclusion

Synthetic data has the potential to be highly beneficial, as it may be the answer to the many challenges AI developers face in sharing sensitive data. However, like many developments in AI technology, it requires caution and careful implementation to be effective and is potentially dangerous if relied upon haphazardly.


“Smart Nation” Building in Singapore
/osgoode/iposgoode/2023/02/14/smart-nation-building-in-singapore/ (Tue, 14 Feb 2023)


Henry Rhyu is a 1L JD Candidate at Osgoode Hall Law School. This article is a summary of the author’s dissertation written as part of his program requirement for his MSc in Criminology at the University of Oxford.


After decades of political instability and economic turmoil during the 20th century, Singapore has advanced into one of the wealthiest countries in the world. Singapore is committed to becoming the world’s first “Smart Nation”. Applying this strategy, the People’s Action Party (PAP) has been increasingly integrating AI and other cutting-edge technology into addressing the city-state’s concerns, with safety and security cited as a primary sector when implementing this strategy.

What are “Xavier” Surveillance Robots, and Do They Help Minimize Human Bias in Police Decision-Making?

AI-powered surveillance robots are at the forefront of Singapore’s commitment to enhancing the safety and security of the city-state. On September 5, 2021, the government announced that it would deploy the “Xavier” police robots as part of a three-week trial. Developed jointly by HTX and the Agency for Science, Technology and Research alongside several other government agencies, these twin artificial intelligence (AI) robots were stationed in an area at the heart of Singapore. The robots were programmed to detect behaviours that amount to minor infractions, such as improperly parking a bicycle or smoking in forbidden areas.

As they gather more data with every novel type of infraction they are confronted with, the Xaviers continue to learn. Using contextual information, such as location, date, and time, these robots identify areas that demonstrate a statistical likelihood of exhibiting undesirable activity.

One societal benefit of the robots is combatting potential future shortages of human police officers, as well as allowing existing officers to allocate their limited resources elsewhere.

Other proposed benefits are more difficult to verify. The government often describes the deployment of the Xaviers as a helpful method of reducing human bias in police decision-making, but this remains to be seen. One critic argues that while AI is adept at recognizing patterns of behaviour, it fundamentally cannot explain or question the logic underlying the decisions it generates.

Indeed, debates surrounding the potential for racial profiling have led to pushback against predictive policing technology in some Western countries. The European Parliament, for instance, prohibited the use of AI-powered preventive justice tools because they could generate racially biased outcomes. Reports indicate that Singaporean citizens express similar concerns. One Singaporean human rights activist cited the potential for this surveillance technology to encroach on citizens’ right to privacy and due process.

What are the Existing AI regulations?

Presently, Singapore has no binding AI-specific regulation. Instead, in 2019, the Personal Data Protection Commission, the national government-mandated body for AI-related concerns, established a voluntary ethics framework developed for Singapore-affiliated organizations that use AI in their decision-making processes. This ethics framework states that (1) the decision-making process of AI technology must be “explainable, transparent, and fair,” and (2) AI-based solutions must ensure that promoting the well-being of society is their number one priority.

Conclusion + Policy Implications

With substantial resources allocated to AI research in Singapore, surveillance technology is expected to become ever more sophisticated in the city-state. Whether the existing AI regulatory framework effectively safeguards against the various potential unethical manifestations and implications of predictive policing technologies is beyond the scope of this article. However, one thing is clear: Singapore should remain wary of arming surveillance robots. While the Xaviers are not programmed to apply force against citizens, armed robots exist in other countries; the Dallas Police Department famously used a police robot to detonate a bomb against a suspect. Singapore must therefore identify and carefully straddle the fine line between using cutting-edge surveillance technology to enhance national security and providing the police with unfettered powers that risk violating citizens’ right to privacy and due process.


US Supreme Court to Deal with the Patent Enablement Standard
/osgoode/iposgoode/2023/02/13/us-supreme-court-to-deal-with-the-patent-enablement-standard/ (Mon, 13 Feb 2023)


Emily Xiang is an IPilogue Writer, a Senior Fellow with the IP Innovation Clinic, and a 3L JD Candidate at Osgoode Hall Law School.


For the first time in decades, the US Supreme Court will engage with the requirement of enablement in patent applications. On November 4th, 2022, the Supreme Court agreed to review the Federal Circuit’s decision in Amgen v Sanofi. Specifically, Amgen seeks to appeal a ruling in which the court found Amgen’s patents invalid for lack of enablement.

The requirement of enablement in US patent law is codified in 35 USC s. 112, which requires that the specification of a patent application “enable any person skilled in the art… to make and use” the invention in question. The question presented in Amgen v Sanofi is whether this statutory requirement governs enablement (that the specification teaches those skilled in the art to “make and use” the claimed invention) or whether it must instead enable those skilled in the art “to reach the full scope of the claimed embodiments” without “undue experimentation” (characterized by substantial “time and effort”).

In 2014, Amgen sued Sanofi for infringing its patents concerning drugs for lowering cholesterol. The genus patents specifically cover antibodies that bind to the PCSK9 protein in the body. The patents disclose the amino acid sequences for 26 antibodies that bind to one or more of 15 residues found on the PCSK9 protein. Importantly, the claims at issue are functional claims, in which the antibodies are not claimed based on their structural components but rather on what they do.

On January 3rd, 2023, many interested parties submitted amicus briefs to offer the Supreme Court their take on the issue to be considered. For instance, a brief submitted by a group of law professors argued that the Federal Circuit’s standard imposes “an impossible burden” on patentees and that such a decision represents “a categorical shift in thinking away from teaching the PHOSITA and towards a precise delineation of the boundaries of the claim”. The professors further submitted that such a heightened requirement would be especially burdensome for patentees seeking to protect their innovations in the fields of chemistry and the life sciences, as “a chemical genus with any decently large number of species will never be able to satisfy the new enablement standard”.

Other parties supporting Amgen presented additional reasons. One amicus brief stated that the court’s reasoning “leaves patent practitioners guessing about how to advise client-inventors regarding the extent of disclosure required”. Another warned of the adverse impact that the new enablement requirement might have on the effectiveness of patent incentives for investors to contribute towards research and development, especially in the case of startups and smaller companies.

Moreover, CHAL has filed a motion for leave to participate in oral argument, claiming a “paramount and unique institutional interest and perspective” – that is, the perspective of individuals and companies working in the chemical, pharmaceutical, and biotechnology fields. CHAL asserts that the Federal Circuit’s enablement standard potentially jeopardizes the benefits of many modern innovations and that adhering to the plain meaning of 35 USC s. 112 should continue to be the prevailing approach.

The Supreme Court’s decision regarding the enablement standard for functional claims could also have wide-reaching implications that spill over into other fields, such as technology and computer-implemented inventions. If courts focus too narrowly on the “full scope of the claim” and “undue experimentation” instead of on what those skilled in the art could determine from the specification, it is unclear how broader claims for AI-related inventions (such as those that describe the desired result to be achieved by the AI rather than its structural components or any specific software solutions) might fare under such a standard.

Amgen v Sanofi is scheduled to be heard by the US Supreme Court in the upcoming Spring Term.


]]>
NIST Releases their AI Risk Management Framework 1.0 /osgoode/iposgoode/2023/02/10/nist-releases-their-ai-risk-management-framework-1-0/ Fri, 10 Feb 2023 17:00:00 +0000 https://www.iposgoode.ca/?p=40589 The post NIST Releases their AI Risk Management Framework 1.0 appeared first on IPOsgoode.

]]>

Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


The National Institute of Standards and Technology (NIST) has been tasked with promoting “U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology.” On January 26, 2023, NIST released its AI Risk Management Framework (AI RMF) 1.0 alongside a companion playbook suggesting ways to use the AI RMF to “incorporate trustworthiness considerations in the design, development, deployment, and use of AI systems”. Both the framework and playbook are intended to help organizations understand and manage the potential risks and benefits of AI, and to ensure that AI systems are developed, deployed, and used in a responsible and trustworthy manner. The framework is designed to be a flexible and adaptable tool that can be applied to a wide range of AI systems, including those used in industries such as healthcare, finance, and transportation.

NIST describes trustworthy AI as having seven characteristics: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

Valid and reliable: Produces accurate and consistent results. Its performance should be evaluated and validated through ongoing testing and experimentation, with risk management prioritizing the minimization of potential negative impacts.

Safe: Does not cause harm to people or the environment, and should be designed, developed, and deployed responsibly, with clear information provided for responsible use of the system.

Secure and resilient: Maintains confidentiality, integrity, and availability through protection against common security threats, such as data poisoning and the exfiltration of other intellectual property through AI system endpoints.

Accountable and transparent: Provides appropriate levels of information to AI actors to allow for transparency and accountability of its decisions and actions.

Explainable and interpretable: Represents the underlying AI system’s operation and the meaning of its output in the context of its designed functional purposes. Explainable and interpretable AI systems offer information that helps end users understand their purposes and potential impact.

Privacy-enhanced: Protects the privacy of individuals and organizations in compliance with relevant laws and regulations.

Fair – with harmful bias managed: NIST has identified three major categories of AI bias to be considered and managed: systemic (broad and ever-present societal bias), computational and statistical (typically arising from non-representative samples), and human-cognitive (how people perceive and use AI system information when making decisions or filling in missing information).

The AI RMF’s core is organized around four functions to help organizations address the risks of AI systems in practice: Govern, Map, Measure, and Manage.

Govern: Establishes policies, procedures, and standards for AI systems, along with the roles of key decision-makers, developers, and end-users.

Map: Contextualizes and frames risks by identifying the system's components, data sources, and external dependencies, and by examining how the system is used and by whom.

Measure: Evaluates the potential risks and benefits of the AI system by assessing the system's vulnerabilities and potential social impacts.

Manage: Allocates resources to mitigate identified risks and continuously monitors the system and its environment, establishing processes and procedures to detect and respond to incidents and updating controls as needed.

NIST’s AI risk management framework is voluntary, but it is an important prompt for organizations and teams who design, develop, and deploy AI to think more critically about their responsibilities to the public. Understanding and managing the risks of AI systems will help enhance trustworthiness and, in turn, cultivate public trust in AI – a critical part of AI adoption and advancement.


]]>
Differential Privacy: The Big Tech Solution to Big Data Privacy /osgoode/iposgoode/2022/12/16/differential-privacy-the-big-tech-solution-to-big-data-privacy/ Fri, 16 Dec 2022 17:00:00 +0000 https://www.iposgoode.ca/?p=40382 The post Differential Privacy: The Big Tech Solution to Big Data Privacy appeared first on IPOsgoode.

]]>

Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


The AI revolution has brought about significant concerns about the privacy of big data. Thankfully, over the past decade, big tech has converged on a solution to this problem: differential privacy. The technology is no longer limited to big tech, either; the US Census Bureau applied it to the 2020 census. Furthermore, the European Union is exploring it as well – indicating that policymakers are on board with differential privacy as a standard means of protecting large, tabulated datasets.

What problem does differential privacy aim to solve?

Differential privacy was created to combat the fundamental law of information recovery, which states that “overly accurate answers to too many questions will destroy privacy in a spectacular way.” In a striking example, Latanya Sweeney showed that gender, date of birth, and zip code are sufficient to uniquely identify the vast majority of Americans. By linking these attributes in a supposedly anonymized healthcare database to public voter records, she was able to identify the individual health record of the Governor of Massachusetts.

A similar attack targeted the Netflix Prize dataset, which at the time contained anonymous movie ratings of 500,000 Netflix subscribers. The attacker compared this to the Internet Movie Database (IMDb) and successfully identified the Netflix records of known users, uncovering information such as their apparent political preferences.
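The attacks described above share one recipe: join an “anonymized” dataset to a public one on shared quasi-identifiers. The toy sketch below illustrates the mechanics; all names, values, and field choices are invented for illustration and are not taken from the actual studies.

```python
# Toy linkage (re-identification) attack: match "anonymized" health records
# to a public roster on quasi-identifiers (zip code, date of birth, sex).
# All data below is fabricated for illustration.

ANON_HEALTH = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1962-03-14", "sex": "F", "diagnosis": "asthma"},
]

PUBLIC_VOTERS = [
    {"name": "A. Smith", "zip": "02138", "dob": "1945-07-31", "sex": "M"},
    {"name": "J. Doe", "zip": "02139", "dob": "1971-01-01", "sex": "F"},
]

def link(anon, public, keys=("zip", "dob", "sex")):
    """Re-identify anonymized rows by matching quasi-identifier tuples."""
    # Index the public dataset by its quasi-identifier tuple.
    index = {tuple(p[k] for k in keys): p["name"] for p in public}
    # Attach a name to every anonymized row whose tuple appears publicly.
    return [
        {**row, "name": index[tuple(row[k] for k in keys)]}
        for row in anon
        if tuple(row[k] for k in keys) in index
    ]

matches = link(ANON_HEALTH, PUBLIC_VOTERS)
```

Only the first health record shares a full (zip, dob, sex) tuple with the voter roster, so exactly one individual is re-identified, together with their diagnosis.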

How does one defend against such an attack?

De-anonymization attacks exploit the principle that overly accurate answers to too many questions will destroy privacy. Defending a database against too many questions is impractical, so there must be a method to make answers less accurate without destroying the data’s utility. Differential privacy achieves this by introducing “statistical noise”: noise significant enough to protect the individual’s privacy, but small enough not to meaningfully affect the accuracy of the extracted answers.
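One standard way to add such calibrated statistical noise is the Laplace mechanism. The minimal sketch below applies it to a simple counting query; the function names and parameter values are illustrative, and this is a teaching sketch rather than a production-grade implementation (which would need secure randomness and careful floating-point handling).

```python
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace(0, scale) noise as the difference of two
    independent exponential draws with mean `scale`."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float, rng=None) -> float:
    """Answer 'how many records satisfy predicate?' with differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    suffices. Smaller epsilon => more noise => stronger privacy.
    """
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Each individual answer is perturbed, but because the noise has mean
# zero, aggregate statistics over many queries remain accurate.
rng = random.Random(42)
records = list(range(100))
answers = [private_count(records, lambda r: r < 50, epsilon=1.0, rng=rng)
           for _ in range(20_000)]
average_answer = sum(answers) / len(answers)  # close to the true count, 50
```

This captures the trade-off described above: no single noisy answer betrays whether any one individual is in the dataset, yet the answers stay useful in aggregate.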

Why is this relevant to law?

Differential privacy protects an individual’s information by creating the impression that their information was not used in the analysis at all, which makes compliance with legal requirements for privacy protection more likely. Differential privacy also masks individual contributions so that using an individual’s data will not reveal any personally identifiable information, sharply limiting what can be inferred about any specific individual.

One lawsuit raised (and voluntarily dismissed) legal arguments against differential privacy, alleging that “the defendants’ decision to produce ‘manipulated’ census data to the states for redistricting would result in the delivery of inaccurate data for geographic regions beyond the state's total population in violation of the Census Act”. As the plaintiff voluntarily dismissed the case, we will need to wait to see if this argument succeeds in the future. However, if the courts were to find that the addition of statistical noise violates the data’s integrity, that would be a serious problem for differential privacy.


]]>
Bracing for Impact 2022: AI for the Future of Health – Q&A /osgoode/iposgoode/2022/12/05/bracing-for-impact-2022-ai-for-the-future-of-health-qa/ Mon, 05 Dec 2022 17:00:00 +0000 https://www.iposgoode.ca/?p=40287 The post Bracing for Impact 2022: AI for the Future of Health – Q&A appeared first on IPOsgoode.

]]>

Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


Photo by Buda Photography

On November 9, IP Osgoode, Reichman University and Microsoft hosted the first in-person Bracing for Impact Conference since 2019. The conference focused on “The Future of AI for Society.” While AI is full of exciting possibilities, real-world application and integration are relatively nascent. Implementing AI technology in society requires complex interdisciplinary engagement between engineers, social scientists, application area experts, policymakers, users, and impacted communities. At the conference, an esteemed lineup of speakers across disciplines discussed the forms that interdisciplinary collaboration could take and how AI can help shape a more just, equitable, healthy, and sustainable future.

After their panel discussion, the AI for the Future of Health session held a spirited question & answer period. Attendees and panelists discussed several interesting ideas about advancing AI in healthcare.

Government’s role in providing access to high-quality data

Dr. Gaon argued that government involvement is key to creating the infrastructure necessary for facilitating data access, specifically in gathering the relevant solution-implementing groups. Ms. Bawa responded with concern that state-managed data collection would introduce bias, as the segment of people who interact with government poorly represents the general population. Ms. Dykeman added that a robust regulatory framework may be too slow, and that we need guidance in the meantime so as not to impede AI development.

Synthetic data as a workaround for privacy issues

As a potential means for government to provide easy access to data, Dr. Gaon proposed using to facilitate access to data and solve privacy problems, citing a government-led project in Israel. Dr. Singh pointed out that synthetic data is typically used when there is a lack of data, such as for rare diseases. He claimed that using synthetic data when real data is available is a crutch for not figuring out solutions to the problems that were discussed throughout the panel.

Will AI make healthcare less expensive?

Dr. Singh expressed optimism that AI could simultaneously improve costs, time efficiency, and care. As software is cheap relative to human labour, AI could achieve all three of these improvements at once. He noted, however, that costs still arise from developing and maintaining models and, speaking from his experience trying to allocate salaried time for testing AI solutions at Sick Kids, that these costs are difficult to account for.

Handling IP to appropriately incentivize collaboration

Dr. Singh, wearing both clinician and developer hats, expressed concern about patient data IP being transferred inappropriately. He identified a viable solution to this problem: ensuring that hospitals own the IP while the developer owns only the AI delivery mechanism. This would specifically prevent IP from being exported to third parties through investors. Ms. Pio, a platform advisor from Microsoft, endorsed this solution, as ownership of the IP also comes with difficult questions about transparency, bias, and use. She also reminded the audience that many of these AI solutions could be used by a multitude of institutions, so it is prudent to keep the data from becoming part of the product being sold.

Transition from research to clinic

Dr. Singh pointed out three issues with translating work from Toronto research hospitals to smaller local hospitals: how to access the data without violating privacy, how the work will be funded, and how it will be regulated. Ms. Dykeman expressed concern about how much of the development of AI for health occurs under the research designation, and the challenges that may introduce down the road.

From the audience, Prof. David Vaver put forth concerns about IP ownership. He proposed a model in which patients license their data to the hospital rather than assigning ownership to the hospital directly. Dr. Singh acknowledged the philosophical correctness of this model but pointed out the difficulties of implementation – besides technical limitations, he also worried about biases introduced by patients removing their data from the pool.

Conclusion

The panelists shared optimism about the future of AI – and Ms. Pio indicated that many institutions share this optimism as well. Many questions still need answering and frameworks still need building – both in terms of appropriate regulation and the necessary multidisciplinary culture – but the process has started and will only continue forward.


]]>