Data governance Archives - IPOsgoode

Identifying the implications of Big Tech and digital personal data for competition policy
Mon, 17 Mar 2025

Our paper demonstrates the growing awareness among policymakers of the important effects of Big Tech and personal data collection on competition and market power.


By 'Damola Adediji


Policymakers in Canada and worldwide have continued to express deep concerns about Big Tech firms and their extensive collection of personal digital data, which affects how markets operate and compete. In a paper I coauthored with Professor Kean Birch of York University, we dove into these policy materials, using thematic analysis to explore recurring themes across various regions. Our work also sheds light on how the collection of personal data is portrayed in recent reviews of competition laws, policies, and regulations, and on the implications for evolving competition policy.

Why Competition Policy Matters

Big Tech firms are powerful political-economic actors within the economy, especially when it comes to the mass collection and use of digital personal data. As we argue, in a data-driven digital economy they can shape and dominate markets by structurally and strategically undermining competition through their constructed platforms—data-driven ecosystems that appear separate from the market. This capacity gives Big Tech firms structural and techno-economic power over their competitors, making it more important than ever for competition law to step up its game. Through a thematic policy analysis, our research reveals a series of key issues that policymakers around the world are identifying as important structural and techno-economic implications of Big Tech for competition.

Structural and Techno-economic Dimensions of Big Tech’s Market Power

A significant part of Big Tech firms’ market power lies in economies of scale, which can create tough barriers for new competitors to break through. For example, as several policy reviews point out, the high costs needed to start a business can be a genuine hurdle for newcomers, while established companies can handle regulatory costs much more comfortably. Additionally, the costs involved in switching from one provider to another can make users hesitant to change. The digital economy has sped up the impact of these economies of scale, in part because personal data complicates how we understand market definitions in competition policy. The basic assumptions that guide competition policy often use price theory to define markets and identify anti-competitive behaviour. These competition frameworks therefore struggle to address situations involving seemingly ‘free’ goods (like search engines) or the trade of these free goods and services for personal data.

Meanwhile, the techno-economic side of the power held by these Big Tech firms includes both the strategic and responsive growth of relationships involving technology and political-economics. This growth is aimed at connecting a range of stakeholders, including governments, businesses, users, and academia, with the infrastructures and platforms created by Big Tech.

Structural Implications of Big Tech for Competition

Scholars have highlighted the significance of the network effect as a key structural implication of Big Tech for competition policy. These companies have established themselves as intermediaries in building multi-sided market platforms. Network effects result from how the number of users in a network (e.g., social media platforms, search engines) increases the usefulness of the network to its users, thereby raising its attractiveness for new users. Consequently, as competition authorities noted in 2020, network effects lead to a self-reinforcing cycle in which users migrate to the fastest-growing network. With this network effect, Big Tech companies are amassing a startling amount of data, providing them with an enormous competitive advantage, creating barriers to rivals entering or thriving in relevant markets, and allowing the incumbent digital platform providers to expand into adjacent markets.

The second structural effect is connected to but distinct from the first: investments made by Big Tech firms mean they can scale up with lower-than-usual costs. As a 2019 UK review put it, ‘Both the scale and the data that the platforms possess on consumers make it hard for other players, including publishers, to compete.’ Economies of scale have provided significant benefits for Big Tech firms as they have grown quickly to dominate their markets. This is clearly becoming a cause for concern amongst policymakers worldwide (as seen in, e.g., OECD 2022). The main negative effect of such economies of scale is the loss of market contestability: there are significant barriers to entry into digital markets because Big Tech incumbents benefit from first-mover technology advantages; there are also significant disparities in market information; and there are disparities in the capacity to adjust prices because incumbents benefit from greater information (e.g., data collection) and higher processing capacity (e.g., computing infrastructure).

The third structural issue identified in our paper is the gatekeeping role of these Big Tech companies in our societies and economies. Policymakers have thus noted that a few digital gatekeepers hold the keys to the crucial digital infrastructure that impacts our everyday lives—whether it's staying in touch with friends, finding job opportunities, or accessing information. Gatekeepers can control access to the users and their data, which can hold significant value for other firms wishing to connect with consumers. The fact that this vital digital infrastructure, including personal data, is largely provided by Big Tech makes it tough for startups and competitors to enter the market.

Techno-economic implications of Big Tech for competition

The first techno-economic issue we identify is the capacity of Big Tech to enter adjacent markets through data collection. As one competition regulator pointed out in 2019, ‘The extensive amount of data available to Google and Facebook provide these platforms with a competitive advantage and assist with entry into related markets.’ Data-driven business models enable Big Tech to enter adjacent markets through the modular extension of technical standards and terms and conditions (e.g., APIs, SDKs, plugins).

The second techno-economic issue concerns the spread of market power through the creation of digital ecosystems as ‘walled gardens.’ An ecosystem is more than a platform: it is the configuration of technical devices, applications and software, platforms, users and developers, payment systems, terms and conditions, and other legal rights and claims and standards (see: Autoriteit Consument & Markt, 2019). As policymakers have explained, through this ecosystem end-users get locked in, reducing the opportunity for competition, even when products and services (e.g., Gmail, Facebook) are notionally ‘free.’

The third techno-economic issue follows the second: Big Tech reinforces its market power by creating ‘enclaves’ in which it governs economic activities. These enclaves are distinct from markets; they sit inside wider markets, but gatekeepers can establish the internal ‘rules of the game’ and control market information. Policymakers have highlighted various relevant business strategies and practices—including the setting of defaults, cross-selling, and self-preferencing—that reduce competition within these techno-economic enclaves.

Challenges of digital personal data for competition and competition policy

The mass collection and use of personal data by Big Tech therefore has structural and techno-economic implications for competition policy—implications with which policymakers around the world are now grappling.

A key consideration in these policy materials is the techno-economic dimension of data-driven leverage. Policymakers repeatedly observe that Big Tech enjoys a competitive edge, primarily because of its vast personal data reserves and its ability to limit other companies' access to this valuable information. Although any digital firm can gather personal data, having substantial data holdings boosts innovation potential and offers a notable business advantage. This concern has been underscored in policy reviews across jurisdictions.

Already concentrated digital markets are likely to concentrate further without concerted action to change competition policy. Our paper demonstrates the growing awareness among policymakers of the important effects of Big Tech and personal data collection on competition and market power. Of course, there's also a looming concern that the winner-takes-all dynamics fuelled by data control could influence the future development of important technologies like artificial intelligence, which significantly depend on large training datasets.

'Damola Adediji is a Visiting Researcher with IP Osgoode and a Doctoral Candidate with the Centre for Law, Technology & Society at the University of Ottawa.

Engineers Launch Free Access to AI Ethics and Governance Standards
Wed, 15 Mar 2023


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


The Institute of Electrical and Electronics Engineers (IEEE), a professional organization for engineers and technology experts, recently announced the launch of the IEEE GET Program, aimed at providing free access to AI ethics and governance standards. The program is part of IEEE's ongoing efforts to promote responsible AI practices and help organizations develop and implement ethical AI systems.

The program opened seven standards for public access:

  1. Age-Appropriate Digital Services
  2. Addressing Ethical Concerns during System Design
  3. Transparency of Autonomous Systems
  4. Data Privacy
  5. Transparent Employer Data Governance
  6. Ethically Driven Robotics and Automation Systems
  7. Assessing the Impact of Autonomous and Intelligent Systems (“A/IS”) on Human Well-Being

Assessing the Impact of A/IS

The most cited of the standards is the one on Assessing the Impact of A/IS on Human Well-Being, which addresses the growing concern of how autonomous and intelligent systems may affect society. This standard provides a structured approach to evaluating the impact of A/IS on individuals, communities, and society, and helps organizations ensure that their systems are developed and deployed in a manner that supports human well-being. Recommended practices aim to bring an increased awareness of well-being concepts and indicators for A/IS and an increased capacity to monitor, evaluate, and address the well-being impacts of A/IS. Successful application of the standard includes implementing the ability to evaluate the ongoing well-being impact of A/IS on users and stakeholders while continuing to improve the system to safeguard human well-being, resulting in a greater ability to avoid unintentional harm.

The Standard also suggested numerous domains of well-being and accompanying indicators that system designers should be concerned with. These domains pertain to individual well-being (satisfaction with life, affect/feelings, and psychological well-being), social well-being (community, culture, education, economy, health, and work), and regulatory domains (environment, government, and human settlements). The Standard noted that these suggestions are a starting point for selecting indicators and that “indicators should be adapted to fit the circumstances of measuring and gathering data about the well-being impacts for an A/IS on user(s).”

Ethics and Systems Design

The standard on Addressing Ethical Concerns during System Design is also frequently cited. This Standard provides a set of guidelines and best practices for organizations that engage in system and software engineering to make value-based ethical system design and investment decisions.

A number of interesting points to consider are included within the Standard Model Process. The standard provides guidance to organizations on establishing key roles in Ethical Value Engineering Project teams. These teams are then tasked with defining how a system is expected to operate from the users’ perspective (Concept of Operations), identifying stakeholders, and determining the context of use and potential for ethical benefit or harm (Context Exploration). There is also an Ethical Values Elicitation and Prioritization Process, which aims to obtain and rank values and value demonstrators, followed by an Ethical Requirements Definition Process that guides the defining of value-based system requirements to reflect the prioritized core values and their value demonstrators. Finally, the Standard also sets out an Ethical Risk-Based Design Process and a Transparency Management Process, guiding the realization of ethical values and required functionality in designing a system and how to inform stakeholders of the system’s implementation of ethics.

Impact on AI

TÜV SÜD has noted that these IEEE standards are already being incorporated into AI governance. For instance, the European Union's Artificial Intelligence Act (“EU AI Act”) references many of the components that the IEEE makes available in this package. This will likely continue to be relevant both for regulators and AI developers: “TÜV SÜD sees a strategic advantage for those looking to demonstrate eventual compliance to human-centric regulatory measures or market pressures to leverage these IEEE standards and certifications.” Developing ethical AI systems is a multifaceted problem which requires extensive deliberation by organizations involved with AI systems development. The release of free standards by an authoritative governing body will likely immensely benefit everyone involved.

Anonymous for Now: Demystifying Data De-Identification
Fri, 24 Feb 2023


Egin Kongoli is a 3L JD Candidate at Osgoode Hall Law School. This article was written as a requirement for Prof. Pina D’Agostino’s IP Innovation Program.


Canada is getting serious about consumer privacy, or so our lawmakers claim.

Parliament has recognized the public’s need for a data framework that ensures proper transparency and accountability.[i] Ottawa’s response is Bill C-27 and the proposed Consumer Privacy Protection Act (CPPA), meant to govern the future collection, use, and disclosure of personal information for commercial purposes. However, while the law modernizes elements of the privacy framework, it carves out exceptions for de-identified data practices that undermine the very trust the legislation is meant to foster. Standing tenuously on technological assumptions, the exception creates a wild-west scenario ripe for harmful data practices.

Under the CPPA, organizations are not required to obtain user consent to de-identify, a process that modifies data so that “an individual cannot be directly identified.”[ii] The legislation creates an offence for re-identification and, as such, seems aware of the risk.[iii] Nonetheless, further exceptions are made for data anonymization, by which an organization “irreversibly and permanently modif[ies] personal information… to ensure that no individual can be identified from the information, whether directly or indirectly, by any means.”[iv] The CPPA excludes the anonymized data from its purview because, by their definition, there is no reasonable prospect of re-identification.

This logic rests on several problematic assumptions. First, the line which separates de-identified and anonymized data is vague and rarely obvious until re-identification occurs. De-identified data is by its nature not meant to be re-identified, and thus anonymous by the government’s definition. Moreover, the law assumes organizations have the technological capabilities to ensure irreversible and permanent anonymization. While identifiers may be removed, many other seemingly innocuous data points can be used to re-identify individuals, and research from Oxford has recently reinforced this concern. One might imagine many disturbing consequences, from identity fraud to the cancer patient whose allegedly-anonymous data is used to change their insurance coverage and rates.
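The re-identification risk from "seemingly innocuous data points" is essentially a linkage attack: joining a nominally de-identified dataset to a public auxiliary source on shared quasi-identifiers. The sketch below illustrates the idea; every name, record, and field choice is invented for illustration.

```python
# Illustrative linkage attack: a "de-identified" dataset joined to a public
# auxiliary dataset on quasi-identifiers. Every record here is invented.

deidentified_health = [
    {"zip": "M5V", "birth_year": 1985, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "K1A", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "Alice Example", "zip": "M5V", "birth_year": 1985, "sex": "F"},
    {"name": "Bob Example", "zip": "K1A", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(health_rows, auxiliary_rows):
    """Match rows on quasi-identifiers; a unique match recovers an identity."""
    hits = []
    for row in health_rows:
        key = tuple(row[q] for q in QUASI_IDENTIFIERS)
        matches = [aux for aux in auxiliary_rows
                   if tuple(aux[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # combination is unique in the auxiliary data
            hits.append((matches[0]["name"], row["diagnosis"]))
    return hits

print(reidentify(deidentified_health, public_voter_roll))
# -> [('Alice Example', 'diabetes'), ('Bob Example', 'asthma')]
```

No direct identifier was ever in the health data; the identities fall out of the join, which is exactly why "release-and-forget" de-identification is fragile.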

How can the disclosure and use of data be monitored if the law excludes anonymized data from regulation? Privacy enforcement may require individuals to come forward with complaints about the misuse of their data.[v] The system thus asks users to not only be aware of their data anonymization (which they never consented to) and its subsequent disclosure (kept secret from them) but to catch the bad actors re-identifying information the regulators turned a blind eye to. Our framework’s release-and-forget de-identification model thus opens the door to potential misuse of personal information that will remain altogether hidden from the regulator’s or public’s view. Where is the transparency or accountability?

While the anonymized exception answers the growing demands of businesses seeking to use personal data, the current state of de-identification practices does not satisfy the standards of the CPPA. The European GDPR includes data that does not contain direct identifiers but is capable of re-identification, known as ‘pseudonymised’ data, as within the scope of the law. That our lawmakers decided against regulating allegedly-anonymous data raises the question of whether their priorities indeed lie with the needs of the public or of commerce.


[i] Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2022, preamble, para 8.

[ii] Ibid at s 2(1).

[iii] Ibid at s 128.

[iv] Ibid at s 2(1).

[v] Ibid at s 107.

Synthetic Data: The Next Solution for Data Privacy?
Thu, 23 Feb 2023


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


One contentious point from the Bracing for Impact session was synthetic data’s potential to solve the privacy concerns surrounding the datasets needed to train AI algorithms. In light of its increasing popularity, I will explore the benefits and dangers of this potential solution.

Concept

The data privacy concern that synthetic data aims to address is closely related to the goal of de-identification: protecting data from being re-identified without reducing data utility. This is distinct from data augmentation, which is the process of adding new data to an existing real-world dataset in order to provide more training data, and could include rotating images or combining two images to create a new one. Data augmentation is typically not useful in the privacy context.

In a blog post, the Office of the Privacy Commissioner of Canada (“OPC”) describes synthetic data as “fake data produced by an algorithm whose goal is to retain the same statistical properties as some real data, but with no one-to-one mapping between records in the synthetic data and the real data.” Synthetic data consists of real-world source data that is put through a generative statistical model, which is evaluated for statistical similarity to the source alongside privacy metrics. Critically, there is no need to remove quasi-identifying data, that is, data vulnerable to de-anonymization. This results in more complete datasets.
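The OPC's description can be illustrated with a deliberately simplified sketch: estimate the statistical properties of a real column, then sample fresh values from the fitted distribution. The ages are invented, and a real synthetic data generator would model far richer, multi-column structure than a single Gaussian.

```python
import random
import statistics

# Toy sketch of the OPC's description of synthetic data: learn the statistical
# properties of real source data, then sample new records so that there is
# no one-to-one mapping back to any real record.

real_ages = [34, 41, 29, 55, 47, 38, 62, 30, 44, 51]

mu = statistics.mean(real_ages)      # 43.1
sigma = statistics.stdev(real_ages)  # roughly 10.9

random.seed(0)  # reproducible sketch
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(10)]

# The synthetic column approximates the source's statistics, but no entry
# corresponds to a particular real individual.
print(round(mu, 1), round(statistics.mean(synthetic_ages), 1))
```

The privacy evaluation step the article mentions would then check both statistical similarity and metrics such as the distance between synthetic and real records.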

Benefits

Synthetic data uses a highly automated process to provide protection from re-identification. This results in datasets that can be readily shared between AI developers without the dangers of privacy concerns. Observers also point out that there are substantial cost savings: one synthetic data service company founder estimated that “a single image that could cost $6 from a labeling service can be artificially generated for six cents.” Synthetic data can also be manufactured to reduce bias by deliberately including a wide variety of rare but crucial edge-cases. Nvidia uses machine vision for autonomous vehicles as their example, but I think this concept should translate to improving representation of marginalized and under-represented groups in large datasets in healthcare or facial recognition. Many of the Bracing for Impact panelists shared this concern.

Dangers

The OPC notes in their blog many issues and concerns, particularly regarding re-identification. This is especially true if the synthetic data is not generated with sufficient care and the “generative model learns the statistical properties of the source data too closely or too exactly”. In other words, if it “overfits” the data, then the synthetic data will simply replicate the source data, making re-identification easy. Moreover, there is also concern with membership inference, where the mere fact that an individual’s data exists in the source is an inherent risk. A recent study also demonstrated that “synthetic data does not provide a better tradeoff between privacy and utility than traditional anonymization techniques” and that “the privacy-utility tradeoff of synthetic data publishing is hard to predict.” This indicates that characterizing synthetic data as a “silver bullet” likely oversells its capabilities.
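The overfitting failure mode is easy to illustrate with an extreme case: a degenerate "generator" that has memorized its training set produces synthetic records that are verbatim copies of real ones, so privacy collapses entirely. The records below are invented.

```python
import random

# Degenerate "generator" that has overfit to the point of memorization: its
# samples are verbatim copies of training records. All records are invented.

source_records = [
    ("M5V", 1985, "diabetes"),
    ("K1A", 1990, "asthma"),
    ("V6B", 1978, "arthritis"),
]

def overfit_sample(records, n, seed=0):
    """A model that memorized its training set just replays real records."""
    rng = random.Random(seed)
    return [rng.choice(records) for _ in range(n)]

samples = overfit_sample(source_records, 5)
leaked = [s for s in samples if s in source_records]
print(f"{len(leaked)}/{len(samples)} synthetic records are verbatim copies")
# -> 5/5 synthetic records are verbatim copies
```

Real generators sit somewhere between this extreme and pure noise, which is why the privacy-utility tradeoff is, as the study quoted above puts it, hard to predict.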

Implementations

Nvidia is using synthetic data in computer vision, but its primary purpose there is not privacy, which shows that the technology has other important functions as well. Leading platforms are also applying synthetic data in healthcare. And this is only the beginning: adoption of synthetic data is widely predicted to grow.

Conclusion

Synthetic data has the potential to be highly beneficial, as it may be the answer to the many challenges AI developers face in sharing sensitive data. However, like many developments in AI technology, it requires caution and careful implementation to be effective and is potentially dangerous if relied upon haphazardly.

Office Of The Privacy Commissioner Of Canada Publishes Results Of Investigation Into Marriott Data Breach Of 2018
Thu, 27 Oct 2022


M. Imtiaz Karamat is an IP Osgoode Alumnus and Associate Lawyer at Deeth Williams Wall LLP. This article was originally posted on the E-TIPS® Newsletter on October 19, 2022.


On September 29, 2022, the Office of the Privacy Commissioner of Canada (the OPC) published the results of its investigation into the 2018 data breach involving Marriott International, Inc. (Marriott), finding many of the hotel giant’s privacy controls inadequate and recommending remedial steps to prevent future breaches.

Marriott announced that it experienced a data breach involving the unauthorized access of a Starwood Hotels (Starwood) database on November 30, 2018, as previously reported by the E-TIPS® Newsletter. Starwood is a separate hospitality company that was acquired by Marriott in 2016, with the unauthorized access reportedly starting before the acquisition (i.e., spanning from 2014 to 2018). The threat actor reportedly obtained access to personal information contained in up to 12.8 million records where the country-of-residence information was listed as Canada. These records included information on guest profiles and contact details, guest reservations, passport details, and encrypted payment card information.

The incident prompted the OPC to launch an investigation into Marriott’s primary operating company for Canadian hotels, Luxury Hotels International of Canada, ULC. During the investigation, the OPC considered the following key issues:

  1. Safeguards. The OPC reviewed whether there were proper information security safeguards in place to protect personal information. It found several deficiencies, including with respect to access controls, anti-virus software, logging and monitoring, and information storage. The OPC found that these deficiencies represented a failure to implement proper protection measures and were a contravention of Principle 4.7 of the Personal Information Protection and Electronic Documents Act (PIPEDA).
  2. Accountability. Following the acquisition of Starwood, Marriott was accountable for implementing policies to properly protect personal information. The OPC found that despite undergoing a post-acquisition assessment of Starwood’s systems and making certain improvements, Marriott failed to adequately perform ongoing security assessments in contravention of Principle 4.1.4 of PIPEDA.
  3. Information Retention. The OPC assessed whether the compromised information was held for an appropriate period of time and found that certain personal information was retained for longer than necessary in violation of Principle 4.5 of PIPEDA.
  4. Notification and Mitigation. Given that the OPC considered the compromised information as presenting an ongoing risk of harm for those affected, it reviewed whether appropriate notification and mitigation measures were used in response to the breach. Marriott provided direct notification to individuals for whom it had a valid email address and indirect notification to the remaining individuals (e.g., issuing press releases and providing breach information on a dedicated website). Additionally, Marriott implemented various mitigation measures, such as offering one year of free web monitoring to affected individuals, establishing a dedicated call centre, implementing a process for individuals to verify whether a passport number was involved in the breach, and notifying credit card networks of the incident. Although the OPC would have preferred the web monitoring protection to run for a longer period, it ultimately found the above notification and mitigation measures to be adequate.

In concluding its report, the OPC acknowledged the remedial steps carried out by Marriott, such as the decommissioning of the Starwood database in December 2018. It also recommended implementing further action to ensure compliance, including having Marriott (i) retain an independent assessor to review any enhancements it has made to its systems; and (ii) review its organizational and governance measures as it relates to selected privacy practices. With both recommendations, the OPC requested that Marriott submit reports detailing their findings and proposed timelines for addressing any action items arising from the reviews.

International Data Protection And Privacy Regulators Release Guidance On Credential Stuffing Attacks
Mon, 08 Aug 2022


M. Imtiaz Karamat is an IP Osgoode Alumnus and Associate Lawyer at Deeth Williams Wall LLP. This article was originally posted on the E-TIPS® Newsletter on July 13, 2022.


On June 27, 2022, the Office of the Privacy Commissioner of Canada, along with fellow members of the Global Privacy Assembly’s International Enforcement Cooperation Working Group (IEWG), released guidance documents to help individuals and organizations protect against credential stuffing attacks.

Credential stuffing attacks exploit the tendency of users to reuse their usernames and passwords across multiple platforms. Threat actors use username and password information that was leaked in past data breaches to access other online accounts belonging to the users. These attacks may result in financial or reputational harm for individuals, and cyberbreaches for organizations despite a robust cyber security infrastructure. In its guidance, the IEWG states that hundreds of millions of credential stuffing attacks occur each day and credential stuffing has become a global threat to personal data.

To assist individuals in defending against credential stuffing attacks, the IEWG advises, among other things, that users should:

  • not reuse their passwords across multiple accounts;
  • consider implementing multi-factor authentication (MFA) where possible;
  • immediately change the passwords for any compromised accounts and for any other accounts protected by the same or similar passwords; and
  • routinely check account information for unusual activity or unauthorized transactions.

For organizations, the IEWG discusses (i) implementing password systems and policies that fortify the creation and management process for account passwords; (ii) making MFA an essential security measure in one’s organization; and (iii) using alternatives to traditional account setups, such as guest accounts, single sign-on systems, and secondary passwords.
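One concrete form the first organizational measure can take is rejecting candidate passwords that already appear in known breach corpora, which blunts credential stuffing because the attacker's leaked credentials no longer open fresh accounts. The sketch below is illustrative only: the corpus and function are hypothetical, not any real service's API.

```python
import hashlib

# Hedged sketch of one organizational defence against credential stuffing:
# refusing passwords that already appear in known breach corpora. The
# corpus and function below are hypothetical, not a real service's API.

BREACHED_HASHES = {
    hashlib.sha256(pw.encode()).hexdigest()
    for pw in ("password123", "letmein", "qwerty")  # stand-in breach corpus
}

def is_breached(password: str) -> bool:
    """Return True if the candidate password appears in the breach corpus."""
    return hashlib.sha256(password.encode()).hexdigest() in BREACHED_HASHES

# A reused, breached password is refused; a novel passphrase is allowed.
print(is_breached("password123"), is_breached("correct horse battery staple"))
# -> True False
```

Storing only hashes of breached passwords, rather than the plaintexts, keeps the check itself from becoming another credential leak.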

Although these guidelines may not represent legal obligations across all IEWG member jurisdictions, the IEWG intends to raise awareness of the threat of credential stuffing and assist the general public, along with private organizations, in fortifying their personal information practices.

IP, Data, and Digital Platform Governance: Notes from the 5th Annual IP Data & Research Conference
Wed, 30 Mar 2022


Jasmine Yu is an IPilogue Writer and a 1L JD Candidate at the University of Toronto.

This article is part of a series covering the 5th Annual IP Data & Research Conference, hosted by the Canadian Intellectual Property Office and the Centre for International Governance Innovation.

The sixth session of the 5th Annual IP Data & Research Conference, hosted by the Canadian Intellectual Property Office (“CIPO”) and the Centre for International Governance Innovation (“CIGI”), centered around IP, data, and digital platform governance. The two-part session was moderated by Michael Falk (director of the Office of the Chief Economist at IP Australia). It kicked off with a presentation on standards used in data ecosystems, followed by a panel discussion on the value of data and the processes involved in building collaborative ecosystems.

Falk’s opening remarks set the stage for this truly enlightening session. Over the past couple of years, our reliance on digital platforms has greatly increased, transforming how we do business and conduct our lives. This revolution has made data ecosystems and international standards all the more important.

Part I: Presentation

The first presentation was delivered by Sean Martineau (acting director and research economist at the CIPO) and Keith Jansa (executive director of the CIO Strategy Council).

They first highlighted several trends in intellectual property:

  • Intangible assets’ growing importance
  • Increased IP filings within the past two decades, both internationally and in Canada
  • Growth in standard essential patents (“SEPs”) across the world, by country, and by individual standard setting organizations (“SSOs”)

Moving into a discussion on standards, they noted that standards establish accepted practices and technical requirements and, at times, modernize public policy. It is fascinating how integrated standards are with our daily lives. The device you are reading this article on interacts with multiple technologies, implicating hundreds of SEPs! Some organizations collect licensing profits, while others write cheques as a cost of bringing products to market, every time you use your device!

Jansa emphasized the importance of recognizing standards’ significance, the levers and process of standard development, and the impact that standards may have on advancing innovation.

Part II: Panel

The subsequent three-person panel consisted of Evegueni Loukipoudis (strategic advisor at Digital Technology Supercluster), Peter Cowan (co-founder, director, and CEO advisor at Innovation Asset Collective, and principal consultant at Northworks IP), and Karima Bawa (strategic advisor on IP at Digital Technology Supercluster and senior fellow at the CIGI).

Loukipoudis kickstarted the panel with a discussion on the value of data, which he noted depends at least partly on who the user is, what they know about the data, and what they can do with it.

Cowan, on the other hand, discussed the importance of institutions having a data strategy and proper infrastructure in place to collect, store, process, and use data properly. He also expressed concern about the inadequate data-strategy literacy in Canada.

Bawa focused largely on the legal aspect of data use. Data has become increasingly commercialized, with more entities entering into data-sharing arrangements to derive value from data. She advised (informally!) parties in data-sharing arrangements to be aware of considerations such as the rights that stakeholders have over the data, regulatory compliance, managing cyber-attack exposure through liability-limiting clauses, and how the data is accessed, stored, and guarded. Bawa also noted that it is wise to be circumspect about whom you share data with, and how you share it.

Conclusion

As distances continue to shrink in our rapidly evolving, connected world, data, standards, and privacy become even more important. The sixth session of the 5th Annual IP Data & Research Conference rounded off a day of excellent presentations and discussions.

For start-ups, aspiring IP specialists, or those simply interested in IP strategy, check out this resource by CIGI: the CIGI Massive Open Online Course (MOOC) on Foundations of IP Strategy, co-created by Karima Bawa.

If you missed the conference, be sure to take a look at the materials shared by the presenters (also available in French).

The post IP, Data, and Digital Platform Governance: Notes from the 5th Annual IP Data & Research Conference appeared first on IPOsgoode.

OSFI Launches Consultation On Draft Technology And Cyber Risk Management Guideline /osgoode/iposgoode/2021/11/26/osfi-launches-consultation-on-draft-technology-and-cyber-risk-management-guideline/ Fri, 26 Nov 2021 17:00:00 +0000 https://www.iposgoode.ca/?p=38698

M. Imtiaz Karamat is an IP Osgoode Alumnus and Associate Lawyer at Deeth Williams Wall LLP. This article was originally posted on .

On November 9, 2021, the Office of the Superintendent of Financial Institutions (OSFI) launched a public consultation on Draft Guideline B‑13: Technology and Cyber Risk Management (the Guideline). It applies to federally regulated financial institutions (FRFIs) and addresses OSFI’s expectations in relation to technology and cyber risks.

The Guideline is organized into five domains, with each domain describing OSFI’s desired outcome for FRFIs in a certain aspect of technology and cyber risk management:

  1. Governance and Risk Management: the FRFI has a clear framework and comprehensive strategy to govern technology and cyber risks.
  2. Technology Operations: there is a resilient and scalable technology environment in place that is kept up-to-date by robust operating processes.
  3. Cyber Security: the FRFI is able to maintain the confidentiality, integrity, and availability of technology assets.
  4. Third-Party Provider Technology and Cyber Risk: third-party providers deliver reliable and secure technology and cyber operations to the FRFI.
  5. Technology Resilience: the FRFI has proper disaster recovery capabilities that allow the delivery of technology services through operational disruption.

In announcing the consultation, OSFI commented on the importance of stakeholder engagement in striking the appropriate balance between its prudential objectives and financial institutions’ ability to compete. Accordingly, OSFI welcomes public feedback on the Guideline and is especially interested in comments on the clarity and application of its outlined expectations, the balance between principle and prescriptiveness in those expectations, and other suggestions that relate to OSFI’s mandate.

The consultation is open until February 9, 2022, and comments can be submitted at Tech.Cyber@osfi-bsif.gc.ca.

The post OSFI Launches Consultation On Draft Technology And Cyber Risk Management Guideline appeared first on IPOsgoode.

EU Penalizes Amazon $887 million for GDPR Infringement /osgoode/iposgoode/2021/08/24/eu-penalizes-amazon-887-million-for-gdpr-infringement/ Tue, 24 Aug 2021 16:00:18 +0000 https://www.iposgoode.ca/?p=38097


Tiffany Wang is an IPilogue Writer, IP Innovation Clinic Fellow, and a 2L JD Candidate at Osgoode Hall Law School.


In July, the European Union delivered an unprecedented fine against Amazon—a record $887 million USD. Luxembourg’s National Commission for Data Protection (CNPD) penalized Amazon for their . The $887 million fine is almost triple the amount of General Data Protection Regulation .

La Quadrature du Net claims to represent the .

Amazon refuses to remain idle. The multinational firm has already declared it will initiate the to refute this penalty. Amazon voiced that there has been and continues to promise that . The irony here, however, rests in the reality that

The EU’s penalty against Amazon . Legislation still has teeth despite Luxembourg’s historically friendly stance toward Amazon.

The unprecedented fine also underscores the EU’s of Amazon. Amazon has . Even though Amazon claims that collecting data helps to foster a better online retail environment, regulators and lawmakers. In fact, growing suspicion clouds the correlation between data and Amazon’s . The 2018 privacy investigation only fuels the .

Amazon’s slogan is “Work hard. Have fun. Make history”. Indeed, Amazon has made history with its $887 million penalty. But is this the “history” that Jeff Bezos envisioned?

The post EU Penalizes Amazon $887 million for GDPR Infringement appeared first on IPOsgoode.

Feds promise to create data commissioner, more funding for IP as part of fiscal plan /osgoode/iposgoode/2021/05/05/feds-promise-to-create-data-commissioner-more-funding-for-ip-as-part-of-fiscal-plan/ Wed, 05 May 2021 16:00:23 +0000 https://www.iposgoode.ca/?p=37292

This article was originally published by The Lawyer’s Daily, part of LexisNexis Canada Inc., on May 3, 2021.

Although the Trudeau government focused much of its attention in the recent federal budget on the continuing fight against the COVID-19 pandemic, buried deep in the document were promises to tackle data and intellectual property (IP) issues, which it says will be key as a digital economy becomes the norm for many.

With more and more of people’s lives happening online — and the pandemic forcing individuals to move their workstations from gleaming towers to cozy home offices — the federal government is saying a digital economy that serves and protects Canadian business is vital for long-term growth, but Canadians must be able to trust that their data is protected and being used responsibly.

To that end, the federal Liberals are promising to create an office of data commissioner aimed at informing government and business approaches to data-driven issues to help protect people’s personal data and to encourage innovation in the digital marketplace.

Pina D’Agostino, founder and director of the Intellectual Property Law and Technology Program at Osgoode Hall Law School (IP Osgoode), said appointing a data commissioner is “basically signalling the importance of data as a new currency.”

“Ownership and governance issues of data are not privacy issues, so it would be beyond the mandate of a privacy commissioner,” she said. “The signal is that it’s not just about privacy that matters, there are other social implications — what they speak about in artificial intelligence is that data can tend to privilege certain demographic groups in society, so we want to ensure that doesn’t happen and someone like a data commissioner would be mindful of that.”

Marc Yu, a privacy and data management lawyer with Edmonton’s Field Law, said the concept of a data commissioner is “interesting” for Canada.

“It is a fairly short description in the budget as to what the data commissioner is intended to do, so it is likely there will be further details once the commissioner’s office is developed in the future and becomes operational,” he said. “I would think one of the data commissioner’s roles would be to streamline information sharing amongst the public sector, so amongst the federal agencies and federal departments we would have a clearer process in terms of how data might be shared between these different agencies and departments to help them further their objectives and functions in this digital environment, while also maintaining the personal privacy of individuals to whom this information belongs.”

In addition to creating a data commissioner’s office, the budget also contains several direct investments in intellectual property, building on the national intellectual property strategy announced in the government’s 2018 fiscal plan. Ottawa is promising to establish ElevateIP, a program to help accelerators and incubators provide startups with access to expert intellectual property services, and to allow companies to expense the cost of some investments in digital and IP assets.

These initiatives would be complemented by a strategic intellectual property program review, which would be a broad assessment of intellectual property provisions in Canada’s innovation and science programming, from basic research to near-commercial projects.

D’Agostino said the funding is welcome because the costs associated with many intellectual property matters, such as filing patents, are very high. She also noted that setting up a review “really speaks to the issues which we have seen in the IP system.”

“It is not useful to have a siloed approach — we have a Patent Act, a Copyright Act and a Trademark Act that don’t speak to one another and at the same time develop policies in isolation,” she said. “We need to look at all of them and how they benefit innovation and science generally because they all work together, ensuring there is no siloing and that the laws and programs we have can really benefit research commercialization and ultimately innovation for Canada.”

More information on the federal budget can be found .

If you have any information, story ideas or news tips for The Lawyer’s Daily, please contact Ian Burns at Ian.Burns@lexisnexis.ca or call 905-415-5906.

Ian Burns is a Digital Reporter for The Lawyer's Daily.

The post Feds promise to create data commissioner, more funding for IP as part of fiscal plan appeared first on IPOsgoode.
