data governance Archives – IPOsgoode

Anonymous for Now: Demystifying Data De-Identification (24 February 2023)


Egin Kongoli is a 3L JD Candidate at Osgoode Hall Law School. This article was written as a requirement for Prof. Pina D’Agostino’s IP Innovation Program.


Canada is getting serious about consumer privacy, or so our lawmakers claim.

Parliament has recognized the public’s need for a data framework that ensures proper transparency and accountability.[i] Ottawa’s response is Bill C-27 and its proposed Consumer Privacy Protection Act (CPPA), meant to govern the future collection, use, and disclosure of personal information for commercial purposes. However, while the law modernizes elements of the privacy framework, it carves out exceptions for de-identified data practices that undermine the very trust the legislation is meant to foster. Standing tenuously on technological assumptions, the exception creates a wild-west scenario ripe for harmful data practices.

Under the CPPA, organizations are not required to obtain user consent to de-identify, a process that modifies data so that “an individual cannot be directly identified.”[ii] The legislation creates an offence for re-identification and, as such, seems aware of the risk.[iii] Nonetheless, further exceptions are made for data anonymization, by which an organization “irreversibly and permanently modif[ies] personal information… to ensure that no individual can be identified from the information, whether directly or indirectly, by any means.”[iv] The CPPA excludes anonymized data from its purview entirely because, by definition, there is no reasonable prospect of re-identification.

This logic rests on several problematic assumptions. First, the line separating de-identified from anonymized data is vague and rarely obvious until re-identification occurs. De-identified data is, by its nature, not meant to be re-identified, and is thus anonymous by the government’s definition. Moreover, the law assumes organizations have the technological capability to ensure irreversible and permanent anonymization. While direct identifiers may be removed, many other seemingly innocuous data points can be used to re-identify individuals, and recent research from Oxford has underscored exactly this risk. One might imagine many disturbing consequences, from identity fraud to the cancer patient whose allegedly anonymous data is used to change their insurance coverage and rates.
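To make the re-identification risk concrete, here is a minimal sketch of a classic “linkage attack”. All of the data, column names, and the auxiliary “voter roll” are hypothetical; the point is only that quasi-identifiers left behind after de-identification can be joined against a public dataset:

```python
# A linkage attack in miniature: a "de-identified" health dataset (names
# removed) is re-linked to names via quasi-identifiers shared with a
# hypothetical public voter roll. All data here is invented.
import pandas as pd

deidentified_health = pd.DataFrame({
    "postal_code": ["M5V 2T6", "K1A 0B1", "H3Z 2Y7"],
    "birth_date":  ["1984-03-14", "1990-07-01", "1975-11-30"],
    "sex":         ["F", "M", "F"],
    "diagnosis":   ["breast cancer", "diabetes", "depression"],
})

public_voter_roll = pd.DataFrame({
    "name":        ["A. Tremblay", "B. Singh", "C. O'Brien"],
    "postal_code": ["M5V 2T6", "K1A 0B1", "H3Z 2Y7"],
    "birth_date":  ["1984-03-14", "1990-07-01", "1975-11-30"],
    "sex":         ["F", "M", "F"],
})

# Joining on the quasi-identifiers restores the link between name and
# diagnosis, even though the health dataset held no direct identifiers.
reidentified = deidentified_health.merge(
    public_voter_roll, on=["postal_code", "birth_date", "sex"]
)
print(reidentified[["name", "diagnosis"]])
```

Postal code, birth date, and sex are exactly the kind of “seemingly innocuous” fields a de-identified dataset typically retains, which is why removing names alone rarely amounts to anonymization.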

How can the disclosure and use of data be monitored if the law excludes anonymized data from regulation? Privacy enforcement may require individuals to come forward with complaints about the misuse of their data.[v] The system thus asks users to not only be aware of their data anonymization (which they never consented to) and its subsequent disclosure (kept secret from them) but to catch the bad actors re-identifying information the regulators turned a blind eye to. Our framework’s release-and-forget de-identification model thus opens the door to potential misuse of personal information that will remain altogether hidden from the regulator’s or public’s view. Where is the transparency or accountability?

While the anonymized exception answers the growing demands of businesses seeking to use personal data, the current state of de-identification practices does not satisfy the standards of the CPPA. The European GDPR keeps data that does not contain direct identifiers but is capable of re-identification, “pseudonymized” data, within the scope of the law. That our lawmakers decided against regulating allegedly anonymous data raises the question of whether their priorities truly lie with the needs of the public or with those of commerce.


[i] Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2022, preamble, para 8.

[ii] Ibid at s 2(1).

[iii] Ibid at s 128.

[iv] Ibid at s 2(1).

[v] Ibid at s 107.

IP, Data, and Digital Platform Governance: Notes from the 5th Annual IP Data & Research Conference (30 March 2022)

Jasmine Yu is an IPilogue Writer and a 1L JD Candidate at the University of Toronto.

This article is part of a series covering the 5th Annual IP Data & Research Conference, hosted by the Canadian Intellectual Property Office and the Centre for International Governance Innovation.

The sixth session of the 5th Annual IP Data & Research Conference, hosted by the Canadian Intellectual Property Office (“CIPO”) and the Centre for International Governance Innovation (“CIGI”), centered around IP, data, and digital platform governance. The two-part session was moderated by Michael Falk (director of the Office of the Chief Economist at IP Australia). It kicked off with a presentation on standards used in data ecosystems, followed by a panel discussion on the value of data and the processes involved in building collaborative ecosystems.

Falk’s opening remarks set the stage for this truly enlightening session. Over the past couple of years, our reliance on digital platforms has greatly increased, transforming how we do business and conduct our lives. This revolution has made data ecosystems and international standards all the more important.

Part I: Presentation

The first presentation was delivered by Sean Martineau (acting director and research economist at the CIPO) and Keith Jansa (executive director of the CIO Strategy Council).

They first highlighted several trends in intellectual property:

  • Intangible assets’ growing importance
  • Increased IP filings within the past two decades, both internationally and in Canada
  • Growth in standard essential patents (“SEPs”) across the world, by country, and by individual standard setting organizations (“SSOs”)

Moving into a discussion on standards, they noted that standards establish accepted practices and technical requirements and, at times, modernize public policy. It is fascinating how integrated standards are with our daily lives. The device you are reading this article on interacts with multiple technologies, implicating hundreds of SEPs! Every time you use that device, some organizations collect licensing profits while others write cheques as a cost of bringing their products to market!

Jansa emphasized the importance of recognizing standards’ significance, the levers and process of standard development, and the impact that standards may have on advancing innovation.

Part II: Panel

The subsequent three-person panel consisted of Evegueni Loukipoudis (strategic advisor at Digital Technology Supercluster), Peter Cowan (co-founder, director, and CEO advisor at Innovation Asset Collective, and principal consultant at Northworks IP), and Karima Bawa (strategic advisor on IP at Digital Technology Supercluster and senior fellow at the CIGI).

Loukipoudis kickstarted the panel with a discussion on the value of data, which he noted depends at least partly on who the user is, what they know about the data, and what they can do with it.

Cowan, on the other hand, discussed the importance of institutions having a data strategy and proper infrastructure in place to collect, store, process, and use data properly. He also expressed concern about the inadequate data-strategy literacy in Canada.

Bawa focused largely on the legal aspects of data use. Data has become increasingly commercialized, with more entities entering into data-sharing arrangements to derive value from data. She advised (informally!) parties in data-sharing arrangements to be aware of considerations such as the rights that stakeholders have over the data, regulatory compliance, managing cyber-attack risk through liability-limiting clauses, and how the data is accessed, stored, and guarded. Bawa also noted that it is wise to be circumspect about whom you share data with, and how you share it.

Conclusion

As the space-time continuum continues to shrink in our rapidly evolving world, data, standards, and privacy become even more important. The sixth session of the 5th Annual IP Data & Research Conference rounded off a day of excellent presentations and discussions.

For start-ups, aspiring IP specialists, or those simply interested in IP strategy, check out this offering from CIGI: the CIGI Massive Open Online Course (MOOC) on Foundations of IP Strategy, co-created by Karima Bawa.

If you missed the conference, be sure to take a look at the materials shared by the presenters (also available in French).

ICYMI: Highlights from Part 2 of IP Osgoode's Bracing for Impact AI Conference Series (8 April 2019)

On March 21, 2019, we had the pleasure of attending IP Osgoode's Bracing for Impact conference series held at the Toronto Reference Library. This year’s conference theme was data governance, with a focus on novel legal issues in two key sectors: health/science and smart cities. Professor D’Agostino’s opening remarks touched on the legal and ethical dimensions of data governance, given the large amount of activity over the last year in the AI space. The day was broken down into five panel discussions, with a luncheon keynote by Professor Kang Lee from the University of Toronto.

Why is Data so Important to the Development of AI?

The first discussion focused on how data quantity and quality determine AI capability. Jonathan Penney (Assistant Professor of Law; Director, Law & Technology Institute, Schulich School of Law, Dalhousie University) offered three instances where data was more important than the AI systems themselves: in advancing AI, in addressing bias and discriminatory practices in existing systems, and in AI accountability and transparency to understand decision making. Notably, Alexander Wissner-Gross examined the last 30 years of AI development and found that the recent advances were largely due to the availability of large data sets. In 2011, IBM’s Jeopardy champion Watson drew on data from 8.6 million Wikipedia articles, and in 2014, the GoogLeNet object-classification system was trained on 1.5 million ImageNet images. Carole Piovesan (Partner & Co-Founder, INQ Data Law) echoed the importance of data to AI systems and touched upon the two growing debates regarding data exchange and privacy. The crux of the privacy debate is the trade-off between privacy as a quasi-constitutional value and the importance of innovation and the need for data to produce public goods. She called upon the audience to think about what a fair exchange in today’s data marketplace means to them. Finally, the speakers discussed the shifting policy landscape led by the EU's adoption of the General Data Protection Regulation (GDPR). In Canada, current regulations still focus mainly on consent. Both speakers acknowledged that we should be moving towards establishing standards, as very few people actually enforce their rights.

Intellectual Property at a Crossroad

Three key ideas came out of the second panel discussion: whether AI systems and programs are eligible for copyright or patent protection under current statutes, the international implications and developments, and the importance of AI in collaboration. Dave Green (Assistant General Counsel, IP Law & Policy, Microsoft) shared Microsoft’s perspective on AI’s role in enabling machine intelligence to simulate or augment elements of human thinking. Two copyright issues that come into play with AI are defining “Works of Authorship” and identifying whether specific types of “copying” are enough to create liability, both of which have been complicated by the use of computer programs and factual materials. Internationally, the requirement that humans be the authors of creative works is found in the laws of the US, Hong Kong, India, New Zealand, the UK, and other countries. As technology and AI advance, do we want to continue to insist that authors of creative works be humans? If we don’t, what does that say about downstream issues such as intent, infringement, and liability? And with regard to international approaches to data mining, should there be a fair dealing exception, particularly when addressing the issue of bias? WIPO recently established a new division that focuses purely on AI, which will be especially important given the spike in AI patenting activity over the past several years. Shlomit Yanisky-Ravid (Faculty Member and Lecturer, ONO Academic Law School and Fordham Law School) challenged the audience with a Turing-style test, showing that it is often difficult to distinguish between works created by AI and those created by a human being.

Catherine Lacavera (Director of IP, Litigation and Employment, Google Inc.) shared her belief that the existing patent and copyright systems are robust enough to deal with the changes we are seeing in AI, though the regulatory and social-impact dimensions of AI are changing at a fast pace. In this regard, it is important to balance social benefit with the potential for abuse, and to build diverse data sets and incorporate privacy and affordability into our design principles going forward. Maya Medeiros (Partner, Norton Rose Fulbright Canada LLP) stressed the importance of using IP rights to facilitate multi-party collaborations that protect AI innovation and incentivize collaborative behaviour. Furthermore, she raised the issues of fair dealing in data mining and the use of different types of IP rights to protect different aspects of the works being generated.

Resolving Data Barriers

The third panel focused on the tools required to access data and facilitate the development of AI.

Momin Malik (Data Science Postdoctoral Fellow, Berkman Klein Center for Internet & Society at Harvard University) discussed how AI is beneficial in certain contexts, such as predicting behaviour. However, the data that is valuable for AI is often locked up in copyright-protected materials. For example, in developing Google's information retrieval system, the company faced many copyright issues. It was able to navigate those challenges by entering into agreements with publishers, ultimately making data more accessible to the public.

Paul Gagnon (Legal Counsel, Element AI) contemplated whether sui generis legislation is the way forward. Europe, for example, relied on the existing concept of fair dealing as an exemption for data mining. However, this exemption is limited, as it applies only to researchers and not to commercial institutions. Having open data and having accessible data are two distinct concepts: accessibility does not mean you may use the data, since uses may be restricted to specific purposes, such as “for academic use only”.

Dave Green concluded the panel discussion by contemplating whether copyright could “make nice with AI”. AI does not copy for the purpose of replicating the work or infringing on the underlying value of expression, but rather it can unlock different insights than “Works of Authorship”. This is the difference between the use of a photo as a work, for aesthetic purposes or factual reporting, and the use of a photo as data. Green looked at examples of how different jurisdictions are making copyright safe for AI and machine learning, such as the fair use exception in Israel. Democratizing the right to learn and research is essential to this field and it remains to be seen how other jurisdictions may embrace this fact.

Luncheon Keynote: Affective Artificial Intelligence & Law: Opportunities, Applications, and Challenges

Kang Lee (Professor and Tier 1 Canada Research Chair in developmental neuroscience, University of Toronto) amazed the audience with a showcase of his connected-health venture. Dr. Lee's interdisciplinary invention brings together research from neuroscience, psychology, physiology, and deep learning to produce AI that can detect, measure, and analyze human affect through physiological cues. The venture’s mobile application turns smart devices into a personal health tool that individuals can use to manage stress and get updates on their personal health. It uses Transdermal Optical Imaging (TOI™), which extracts facial blood-flow signals from ordinary video of the human face. That signal is then processed by the venture’s AI engine, which detects and measures different human emotions. Dr. Lee’s work is significant as it demonstrates how AI can improve the health and science fields and give patients more control over their health care.

Big Data, Health & Science

The fourth panel discussion focused on the unique AI and data issues in the health and science sectors. James Elder (Professor, Lassonde School of Engineering; York Research Chair in Human and Computer Vision, York University) discussed potential uses for converting raw data into 2D images and subsequently converting these images into 3D models. 3D modelling with real data has applications for road and pedestrian traffic. The technology may also address some privacy concerns, since his 3D virtualization technology turns the 2D images into avatars, which has the effect of anonymizing visual appearances. There are many opportunities for visual AI to help improve daily processes.

Victor Garcia (Managing Director & CEO, ABCLive Corporation) discussed how big data can transform the health sciences. Data helps to improve the way companies in this sector do business. Clinical, insurance claims, pharmaceutical, research and development, patient behaviour, and lifestyle data can all contribute a plethora of knowledge to the health sector. These can improve process efficiencies and make hospital resources available sooner to new patients. For example, Humber River Hospital used data analytics to improve their health care services and increase efficiency by 40%.

Ian Stedman (PhD Candidate, Osgoode Hall Law School; Fellow in AI Law & Ethics at SickKids’ Centre for Computational Medicine) highlighted SickKids’ move to integrate AI into its practice, with the development of a task force to examine how data governance and policies, infrastructure, AI solutions, and ethics interact before implementing new AI tools. Stedman stressed that data source and quality are essential because, in the health sector, one must ask the right questions to reach accurate conclusions and diagnoses. With clinical studies, it is much easier to access data since there is a research plan, which includes the research purpose, the targeted population, and the results the researcher hopes to observe. With the data that AI relies on, however, researchers study data to find patterns in order to unlock its potential value. It is therefore difficult to ask for secondary-use disclosure before the research is conducted, when the researcher may not yet know what they are looking for. The takeaway is that regardless of the industry, harmonization and collaboration are key. There is opportunity to put data together from different sources to discover the potential of new clinical decision-making tools.

What Makes a Smart City?

In Toronto and internationally, data privacy issues have come to the forefront of public discussion due to the development of smart cities. Given the proposed Sidewalk Toronto, the collection, storage, and use of data has led to a heated debate about data governance. The Mayor of Barrie, Jeff Lehman, discussed a project that calls upon start-ups and small organizations to develop new technologies that use data to address civic challenges. Instead of putting out a traditional municipal tender, the participating cities released a Request for Solutions and invited responses from the public to provide a cohesive opportunity for collaboration. On the issue of data localization in the Sidewalk Toronto debate, Mayor Lehman believes that consent is possible, but that the data must reside in Canada so that the national government can set the rules around the data being collected. Finally, Mayor Lehman advocated for the use of Privacy Impact Assessments to evaluate the impact of new technology on privacy.

Neetika Sathe (Vice President, Advanced Planning, Alectra Inc.) argued that data policies for smart cities must be worked on at every level of government to develop a national data strategy. Furthermore, Sathe introduced the audience to some of Alectra’s projects and the data-collection challenges associated with each: an end-to-end integrated EV workplace-charging pilot project, a project that collects smart meter data, and a project that uses a private blockchain network to limit access to data.

Natasha Tusikov (Assistant Professor, Department of Social Science, York University; The City Institute at York University) challenged the audience to think about who should own, control, and govern data related to smart cities. Prof. Tusikov discussed the issue of conflicting public and private authority, raising her concern that Waterfront Toronto is not an expert in IP, but in land development. As an example of regulating the governance of smart cities, Barcelona developed a manifesto outlining the importance of technological sovereignty and maintaining digital rights.

To close the panel discussion, John Weigelt (National Technology Officer, Microsoft Canada Inc.) spoke about the importance of settling who participates in developing a smart city and what business model we want to create. If employed correctly, AI will solve societal challenges. Municipalities and companies that thoughtfully clarify their approach to AI first will prosper the most from its benefits.

The conference encouraged thought-provoking discussion about data governance and its implications for health and smart cities. We hope that the discussion about data collection and what we value as a society continues beyond this event. Thoughtful and inclusive discussion will allow us to collectively brace for impact as AI technology continues to advance.

 

Written by Lauren Chan and Summer Lewis. Lauren Chan is an IPilogue editor and a business student at the University of Guelph, and Summer Lewis is an IPilogue editor and a JD candidate at Osgoode Hall Law School.

 

Tech, Tykes and Teens (or: How I Learned to Stop Worrying and Love GAFA) (5 April 2019)
As the 2020 Presidential Primary begins to gather steam south of the border, US Senator Elizabeth Warren’s plan to break up big tech (Google, Amazon, and Facebook – she followed up later with a plan for Apple) has once again brought tech regulation into the political realm.

But the real crux of the problem, the source of tech companies’ economic and social clout, is papered over in only one sentence. It seems likely that’s not because Senator Warren’s team doesn’t care about the issue, but because when it comes to controlling how people consent to data collection, there don’t seem to be any easy answers. That’s especially apparent when it comes to how individuals, corporations, and governments have dealt with data coming from the most plugged in, yet also one of the most vulnerable segments of society – minors.

The Law on Privacy and Minors

In the US, online collection of children's personal data is primarily governed by the federal Children’s Online Privacy Protection Act, or COPPA, which is enforced by the Federal Trade Commission (FTC). Under the law, it is illegal to collect the data of children under the age of 13 without parental permission. Given the costs of complying with that kind of consent requirement, many companies simply disallow children under 13 from using their platforms altogether.

Canada’s federal system, on the other hand, has led to a more complicated overlay of laws. PIPEDA, the Canadian federal privacy law, has no specific provisions regarding the consent of minors. Under its guidelines, the Office of the Privacy Commissioner of Canada (OPC), which administers PIPEDA, generally considers anyone under the age of 13 incapable of giving consent. However, under their own private-sector privacy laws, Alberta, British Columbia, and Quebec stick to a strict case-by-case model rather than any blanket age restriction.

This means that, on the Canadian Federal standard, parental consent is required for collecting online information from children under the age of 13. In the case-by-case model, a child must be able to understand “the nature and consequences of the exercise of the right or power in question”. The case-by-case model also applies federally to children over the age of 13.
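As a rough illustration of how these layered rules interact, here is a minimal sketch (emphatically not legal advice) of the consent logic described above, as a hypothetical platform might encode it. The jurisdiction codes and the `understands_consequences` flag are illustrative assumptions, and the flag is exactly the part that is hard to assess in practice:

```python
# A hypothetical encoding of the Canadian consent rules described above.
# "Case-by-case" provinces assess capacity individually; the federal
# standard treats under-13s as incapable of consenting on their own.
CASE_BY_CASE_PROVINCES = {"AB", "BC", "QC"}

def consent_path(age: int, jurisdiction: str, understands_consequences: bool) -> str:
    """Return which consent path applies to a hypothetical Canadian user."""
    if jurisdiction in CASE_BY_CASE_PROVINCES:
        # Alberta, British Columbia, and Quebec: capacity assessed case by case.
        return "own_consent" if understands_consequences else "parental_consent"
    if age < 13:
        # Federal standard: under-13s generally cannot consent themselves.
        return "parental_consent"
    # 13 and over, federally: back to a case-by-case capacity assessment.
    return "own_consent" if understands_consequences else "parental_consent"

print(consent_path(12, "ON", False))  # parental_consent
print(consent_path(15, "ON", True))   # own_consent
```

Note how the entire scheme pivots on a boolean that no platform can actually measure, which is precisely the problem the next paragraph raises.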

Two issues come to mind: are the laws on the books actually working? And furthermore, does anyone understand the nature and consequences of how they are exercising their privacy rights, let alone children?

Recent Cases: Where the Law Applies and Where it Runs Out

For a look at how child-consent enforcement works in practice, let’s again turn to the United States. On February 27th, 2019, ByteDance, the parent company of TikTok (formerly Musical.ly), agreed to pay a $5.7 million civil penalty to the FTC. The penalty, the largest ever for a violation of this kind, was administered because TikTok had been found to be facilitating the uploads and private messages of children under the age of 13. In fact, one study found that 1 in 4 children had connected with a stranger through apps like TikTok, and 1 in 20 had been asked to strip by a stranger during a live stream. While the company has moved to stop users under 13 from uploading videos as a result of the fine, it remains to be seen how effective that measure will be.

Facebook has likewise drawn scrutiny for its data-collection practices, though in this case it was targeting users aged 13-35. Participants were secretly paid up to $20 per month to install a “Facebook Research” app, with the goal of collecting vast amounts of data from users’ phones. The app demanded root access to the device it was installed on, giving it virtually complete access to the device’s data: photos and videos, web searches, private messages and texts, and location monitoring – all of which Facebook could collect continuously, regardless of encryption. Facebook maintained that participation was consent-based, and that of the fewer than 5 percent of users under the age of 18, all were required to provide parental consent forms. It has since discontinued the program.

It’s not clear what the long-term effect of the TikTok FTC fine will be. That is to say, it’s not obvious that the cost of policing 500 million users is outweighed by a $5.7 million fine. Instead, the way things are today, it seems entirely possible that serious harm to children is an externality of an anonymous (and extremely profitable) internet.

On the other hand, while Facebook’s behaviour might be considered by some to be unsavoury, the question becomes whether there is anything that can be done about it (especially since, unlike the TikTok example, it was not unlawful conduct). What is clear, however, is that ‘consent’ is a nebulous idea, and one not easily grasped by parents, let alone children.

Is ‘Parental Consent’ Meaningful?

For the sake of argument, let’s set aside the thornier issue of how feasible enforcing informed consent actually is. Instead, the core issue is whether individuals are informed enough to consent in the first place.

Parents often underestimate just how influential the way they handle their children’s data can be. Information as innocuous as sharing birthdays online can have devastating consequences for a child; fraudsters have been known to store such data while waiting for children to turn 18, then make credit card and loan applications in the child’s name.

‘Smart toys’ in the home are a problem in and of themselves. These devices are targets for hackers because they are Bluetooth- and internet-compatible with few built-in safety mechanisms (smart speakers have a similar problem, especially ones marketed to children). But despite concerns and breaches, such toys remain on the market.

None of this is meant to assign any ill-intent to parents, who have a hard-enough time raising their kids without worrying about hacked toys or policing their own social media presence to protect their children. Whether through ignorance or by choice, though, parents have demonstrated that they are not much better than their kids when it comes to handling data. But is there any better way to structure the system?

How DO We Handle Collection of Online Information?

Given all the problems, one might think we should just ban collection of minors’ data altogether, regardless of parental consent. Unfortunately, there are several problems with this kind of thinking.

First of all, there is the practical issue of whether it would even be possible without fundamentally reworking the anonymity of the internet (who wants to sign up for an online account by handing over government ID?). The fact that companies have effectively treated their platforms as 13+, without much success, speaks to the difficulty of implementing any kind of platform-directed user vetting.

Secondly, it would still be difficult to prevent the collection of inferred data, such as what users search and view online, because consent is not directly needed for that kind of data collection.

Lastly, there are the economic implications. Online marketing and data-driven e-commerce are obviously massive fields, but it bears emphasizing just how massive – in 2018, the top six companies in the world by market capitalization were all tech-based (in order: Apple, Amazon, Alphabet, Microsoft, Facebook and Alibaba). Moving the internet to the speed of bureaucracy would therefore undoubtedly have knock-on effects to the global economy.

So should the alternative be, as per the OPC’s guidelines for teens, consent requiring an understanding of “the nature and consequences of the exercise of the right or power in question”? As should be clear by now, that does not seem to be a workable standard, given that even adults have a tough time keeping up with all the implications of the digital economy.

The Broader Problem with Privacy in the Digital Age

Returning to Senator Warren’s plan, one can see that, notwithstanding the financial merits or demerits of breaking up big tech, we aren’t really at the point as a society of having an informed discussion as to the trade-offs and moral decisions we must make if we want to continue life in an interconnected world.

Breaking up tech companies won’t address the question of who has access to private information (child or adult) and what they’re doing with it. It’s more of a sui generis problem, where the domination of an economic field by an oligopoly (the reason why some call for breaking up the banks) is fundamentally intertwined with an inability to have a broad, meaningful conversation, thanks to a lack of public knowledge (similar to how inadequate civic education affects our ability to engage in political discourse). When you factor in concerns about cyberwarfare and election meddling, the ‘big-tech’ debate simultaneously hits at all the core themes of our current world: nationalism versus globalism, social reform versus retrenchment, and wealth inequality.

The question of the global digital economy is thus as much a distillation of the political, economic, and social narrative of the early 21st century as the Cold War was of the later 20th century. In that spirit, it may be prudent to take some lessons from Dr. Strangelove, which perhaps best captured the intractability and seeming futility presented by the nuclear age. What I’m left with is the uncomfortable realization that I myself don’t fully understand the implications of my consent and privacy in the digital world; but short of unplugging, it looks like we will keep riding this bomb and hope our choices work out in the end. Is it crazy? Perhaps. But it’s not unprecedented. After all, it only takes one letter to go from MAD to ad.

Written by Peter Werhun. Peter is an IPilogue Editor and JD Candidate at Osgoode Hall Law School.

The Dark Side of Wearable Technology (7 March 2019)
In an earlier post, I discussed how wearables are becoming prominent in modern life, with Toronto being a notable hotspot for technology development and related interest. From a legal perspective, there are two main concerns with wearable technology: privacy and product liability. This instalment in the Toronto Wearables Series will focus on the former.

The problem with smart clothing is that the articles are constantly collecting, transmitting, and storing data, which means that they hold information that is often considered personal, private, sensitive, or confidential. This makes smart clothing’s data-mining abilities extremely strong. The risk is compounded by the fact that this information can easily be posted on social media networks, making it available not only to “friends” of the user but possibly also to unknown or untrusted parties. Furthermore, wearables are able to collect information discreetly, a form of data mining that leaves users not actually knowing what data is being collected. This often means that users underestimate their privacy risks. In fact, a recent study showed that there is “a significant gap between reported concerns and actual users’ behaviors, reinforcing that users often sacrifice their privacy in exchange of benefits.” Put simply, the non-invasive biomedical, biochemical, and physical measurements of wearables have invasive implications for a user’s privacy. However, given the novelty of smart clothing, the extent of these privacy impacts has not yet been fully understood. It is for this reason that empirical studies are necessary.

The same study collected a variety of online comments from users of wearables. Based on the consumer feedback, the study concluded that the primary privacy concerns are linked to the type of personal data that a given wearable device collects, stores, processes and shares. For example, there is a lower level of concern regarding smart accessories that are seen as a gadget (e.g. Fitbits), versus smart clothing that covers a large part of the body. Furthermore, embedded sensors, such as cameras and microphones, pick up data about the user and even people nearby, often without their awareness or consent. The nature of this data is frequently personal and confidential, which implicates privacy issues, especially with respect to surveillance. Other functions of wearables, such as heart rate monitors, glucometers, and activity trackers, can also be intrusive.

Interestingly, even though users perceived wrist-mounted devices as a non-invasive accessory from a privacy perspective, the study found a high associated risk. Indeed, there have been findings of an increased feeling of safety and confidence due to the user’s dependence on this type of wearable to track both biomedical data and daily movements, such as the user’s location when in an unknown area. The ability to track location seems appropriate because of the convenience of having GPS at the ready. However, the communication of a user’s location information, without the control of the user, poses a substantial threat because once location is sensed and stored, it can then be shared online, in real time, through live social media feeds. Yet, given an appearance akin to a watch or a bracelet, wearables’ presence often goes unnoticed, which means that the underlying privacy risk is not seen as a concern on a daily basis. Rather, a user more acutely senses the convenience benefits. This is in stark contrast with the more common smartphone, with which the user has a more conscious interaction.
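To see why a stored location trail is so revealing, consider a minimal sketch with invented data. Even with no name attached, the most frequent overnight position in a wearable's GPS log usually identifies the wearer's home:

```python
# Hypothetical wearable GPS log: (hour_of_day, rounded lat, rounded lon).
# No identifiers, yet a trivial frequency count exposes the home location.
from collections import Counter

pings = [
    (2, 43.66, -79.40), (3, 43.66, -79.40), (23, 43.66, -79.40),
    (13, 43.77, -79.50), (14, 43.77, -79.50),  # daytime: likely a workplace
]

# Keep only overnight samples (10 pm to 6 am), then take the modal position.
overnight = [(lat, lon) for hour, lat, lon in pings if hour >= 22 or hour < 6]
home, count = Counter(overnight).most_common(1)[0]
print(f"Inferred home area: {home} ({count} overnight pings)")
```

A few lines of counting is all it takes, which is why "the device never recorded your name" offers little comfort once location streams are shared or breached.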

In fact, smartphone integration is a key feature of smart clothing, allowing users to synchronize their clothing with their phones for the sake of convenience. From a privacy perspective, however, this means that all of the implications associated with smartphones are then added to the list of concerns regarding smart clothing. For example, more technologically advanced smart clothing inventions could have access to a user’s photos, contacts, bank information, and applications, making all of that data, in addition to the collected biometrics, vulnerable to being shared publicly. Another notable example is that embedded speech recognition applications in both smartphones and smart clothing allow the convenience of hands-free interaction. However, the heightened sensitivity needed to pick up such commands means that even when a user is not alone, a potentially confidential conversation between the user and another party can be captured and stored, once again without knowledge or consent.

The above suggests two concerning points about the privacy risk associated with smart clothing. First, users are already anxious about a host of privacy issues, but the (perhaps more noticeable) benefits offered by these devices make them more willing to sacrifice their privacy. Second, even though users have articulated some concerns, these are often misdirected or underestimated. This means that users do not know precisely what to worry about, and are therefore ill-equipped to protect themselves. Indeed, new applications, such as facial recognition software embedded in smart technologies, offer such a profound sense of convenience and marketable novelty that consumers willingly allow a device to repeatedly capture and store every inch of their face. This misplaced sense of trust in smart technologies, and particularly smart clothing, presents a significant barrier to technological advancement, as users’ engagement is difficult to predict.

This is the second post in the Toronto Wearables Series by Saba Samanian regarding wearable technology and its IP and privacy law implications. Saba was recently appointed a Toronto Ambassador in the wearables community and seeks to do her part in fostering that community in Toronto.

 

Written by Saba Samanian, IPilogue Editor and JD Candidate at Osgoode Hall Law School.

The Tech Law Ultimatum: Consent or Exile? (16 November 2018)
Living in the twenty-first century comes with the need to manage expectations. While we live in a modern age with a variety of technological advancements, we may not be as innovative as we previously imagined. After decades of television shows like The Jetsons, some may even be inclined to ask, “Where’s my jetpack?” Professor Daithí Mac Síthigh, during his visit to Osgoode Hall Law School this fall term, spoke about the challenging relationship between technological innovation and the law. Prof. Mac Síthigh addressed the technological advancements we have made, and what is still on the inventive (and legal drafting) table, in his talk “Help! My Jetpack is an Algorithm: Smart Cities, Sharing Economies, and Law in the Face of Disruption”.

Professor Mac Síthigh drew on remarks made this year by Sadiq Khan, the Mayor of London, and stressed the important role the law plays in technological and social development. Speaking at SXSW, Khan explained that the law plays a balancing role in mitigating the potentially negative impact of disruption while allowing society to evolve.

The concept of “smart cities” is something that highlights how the law is performing in the face of twenty-first century “disruption”. Professor Mac Síthigh linked the smart city concept to the sharing economy, which he defined as a situation that deals with transforming under-utilized assets in a manner that makes them more accessible to a community. This could lead to a reduced need for individual ownership of these resources.

Citing recent examples, Professor Mac Síthigh explored how the collection of data in these cities unveils new legal tensions. For instance, Alphabet’s Sidewalk Labs is reimagining Toronto’s eastern waterfront area. This variation of a smart city will use sensors to measure garbage disposal, recycling, noise, and pollution. The increased presence of cameras can even collect data to help improve the flow of traffic. While the project promises some of the twenty-first century innovations many have been waiting for, it also reveals how some of the risks of such technologies remain underexplored.

There is an inherent trade off in collecting data to help cities become more efficient and green. Residents will be giving up their privacy rights for the good of society. There is no way to live off the grid in this type of environment, which means that if individuals want to be excluded from data collection, they would likely reside outside of this community. Is full consent or exile the only choice in the age of smart cities?

Currently, different Canadian laws may apply depending on which entity is collecting the data, thus presenting different methods of action for residents.

  1. If a commercial technology company is collecting the data, the Personal Information Protection and Electronic Documents Act (PIPEDA) applies to these processes.
  2. When this data is collected, accessed, or used by federal government institutions, the Privacy Act applies.

Both of these acts regulate how personal information can be shared and this may be applicable to data collected through smart cities.

Research from the Canadian Internet Policy and Public Interest Clinic (CIPPIC) reveals one of the weaknesses of these laws in their current forms. Where information is not “personal”, it can be freely shared with third parties. In order for data to be non-personal, technology companies would be required to strip the data of personal identifiers. The data on garbage disposal, for example, cannot be linked to any addresses, names, photographs, and so on if the information is to be sharable. Another caveat is that individuals can choose to protect their information through confidentiality terms in a contract. This means that there could be a great onus on the residents of smart cities to find ways to protect their information if they truly wish for their data to remain private.
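For a sense of what "stripping personal identifiers" looks like in practice, here is a minimal sketch over a hypothetical smart-city sensor record. The field names are invented for illustration, and the sketch deliberately shows the catch: fields left behind, like a bin ID or timestamp, can still act as quasi-identifiers if they can be re-linked elsewhere:

```python
# Strip direct identifiers from a hypothetical sensor record before sharing.
# Removing these fields makes the record look "non-personal", but residual
# fields (bin_id, pickup_time) may still be linkable back to a household.
DIRECT_IDENTIFIERS = {"name", "address", "photo_url", "email"}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

raw = {
    "name": "A. Resident",
    "address": "123 Queens Quay E",
    "bin_id": "WF-0042",
    "pickup_time": "2018-11-12T08:30",
    "waste_kg": 4.2,
}
print(strip_identifiers(raw))
# {'bin_id': 'WF-0042', 'pickup_time': '2018-11-12T08:30', 'waste_kg': 4.2}
```

Whether the output truly falls outside "personal information" turns on whether bin_id can be mapped back to an address, which is exactly the kind of judgment the current framework leaves to the data holder.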

As Professor Mac Síthigh’s talk makes clear, smart cities and the concept of a sharing economy are not new forms of technology; rather, they are new processes that rely on data in novel ways. In the same way that technology companies have rethought data collection, it is necessary for lawyers and policy makers to rethink how the law applies to this newest iteration of technology. It requires a careful balance of the existing laws that seem applicable to smart cities, such as privacy laws, in addition to new provisions that give consumers more opportunities to protect and take control of their data without completely excluding them from the innovation process.

 

Summer Lewis is an IPilogue Editor and a JD candidate at Osgoode Hall Law School.
