big data Archives - IPOsgoode (/osgoode/iposgoode/tag/big-data/) - An Authoritative Leader in IP

Understanding Primary Law as Legal Data (Fri, 04 Nov 2022)


Anita Gogia is an IPilogue Writer and a 2L JD Candidate at Osgoode Hall Law School.


During the Refugee Law Lab’s online seminar in September, students learned about how primary law can be used as data to increase its machine readability. Sarah Sutherland, the president and CEO of the Canadian Legal Information Institute (CanLII), discussed the role of technological solutions in legal work.

Sutherland first addressed how the structure of primary law affects its usage as data. Primary law is recorded and published in a way that can be referred to in ensuring proper process and governance. Legal data is “linguistic, narrational, situational, contextual, ambiguous, and process-dependant.” It is complex, just as human relationships are. When case-related documents are converted to legal data, the result is semi-structured. While section headings are visible, the content within them may not be marked or tagged in a way that supports efficient processing. This means machine readability is weak: the content in the underlying document cannot be analyzed without manual tagging.
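The gap Sutherland describes can be illustrated with a minimal sketch. The sample decision, headings, and heuristic below are invented for illustration (this is not CanLII's actual pipeline); the point is that a program must guess at structure that a plain-text document never marks up:

```python
import re

# A toy court decision stored as plain text: the headings are visible to a
# human reader, but nothing marks them up for a machine.
decision = """\
I. OVERVIEW
The applicant seeks judicial review.
II. ANALYSIS
The standard of review is reasonableness.
III. DISPOSITION
The application is dismissed."""

def tag_sections(text: str) -> dict[str, str]:
    """Guess the structure by treating Roman-numeral lines as headings."""
    sections: dict[str, str] = {}
    current = None
    for line in text.splitlines():
        if re.match(r"^[IVX]+\.\s", line):  # a heuristic, easily fooled
            current = line
            sections[current] = ""
        elif current is not None:
            sections[current] += line + " "
    return sections
```

A brittle heuristic like this is exactly why manual tagging is still needed: any decision whose headings deviate from the assumed pattern silently loses its structure.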

Sutherland used Amazon to exemplify the impact of analyzing large amounts of data. Amazon records millions of interactions and, through that data, can extrapolate and predict what a user may be interested in purchasing. In some instances, Amazon can even prepare shipments before customers make purchases because of its forecasting accuracy. Legal data, on the other hand, largely lacks binary “yes/no” values and is poorly suited to statistical analysis, even when the data sets are large. Specifically, in reference to legal data, “statistical methods may not have enough data points to give reliable results,” while “machine learning works well, and natural language processing techniques are more helpful.”

Statistical analysis requires a representative random sample, a criterion legal data cannot meet, as case law itself is non-random. Sutherland noted that such selection bias exists at every level. For instance, “litigants are biased towards people who have money and high conflict complex problems, cases are selected by judges for which they will write and publish decisions, and matters heard by appellate courts are selected.” Overall, people who have legal problems are not a random sample.

Language, specifically unintentional and intentional imprecision, is yet another reason legal data can be inefficient to use. There is unintentional imprecision because of ambiguities in human syntax and a lack of definition or precision around concepts. There is also intentional imprecision, because not all situations can be anticipated and because human interpretation is subjective. For example, the legislature is mindful that if a drafted statute does not anticipate, or is unclear about, a specific set of facts, the matter will go to the courts, which will apply their tools of statutory interpretation. Sutherland suggested that we should approach data the same way we approach drafting statutes: by accounting for imprecision.

Sutherland also discussed how machine-readable law can solve such problems through tagging and coding that makes the law explicit; this is referred to as “law as code.” Publishing law in such a way makes it widely usable across different applications, increasing efficiency within the legal system. How can this be done? It was suggested that the law could first be expressed literally, as code, and then in a human way: rather than converting law expressed in human language into code, computer code would be converted into human language.

It is unclear whether this would allow for more functionality with ambiguity in the law, and lawmakers would require sophisticated technical and computing skills to make it possible, but there is potential to make the legal system more productive.

Privacy Plight: Apple’s Proposed Changes & Consumer Pushback (Tue, 07 Sep 2021)

Apple logo over people carrying screens (Photo by Jimmy Jin)

Natalie Bravo is an IPilogue Writer and a 2L JD Candidate at Osgoode Hall Law School.


In August, Apple made headlines by announcing a set of new child-safety features. These new features are purported to expand protections for children through modified communication tools, on-device algorithmic scanning, and updated Search guidance. Although protecting children as a vulnerable group should be of utmost importance to all, many security experts find some of these proposed changes troubling, as they may undermine the company’s longstanding reputation in privacy preservation and enable future security breaches.

Over the years, Apple has cultivated a strong reputation as a privacy-first company. One of their core values is that privacy is a fundamental human right. After all, their security and privacy designs are so robust that Apple allegedly cannot access encrypted user data. In 2015, Apple CEO Tim Cook stated that while issues such as national security are important, Apple would not implement any technology which malicious actors could misuse as a backdoor to encrypted user data. Now, in 2021, Apple’s ironclad encrypted system has one exception.

As one of the changes, Apple intends to introduce photo-scanning technology for all users to identify Child Sexual Abuse Material (CSAM). Similar well-intentioned technology is already widely used online to identify known explicit materials, including terrorist propaganda and other violent content. Some consumers worry that all their private images will be scanned in search of illegal content; however, Apple is not proposing that. The technology computes the “hash” of a file and matches it against a database of known hashes. If a certain threshold of known CSAM is found, barring false positives, then law enforcement is contacted. Strangely enough, Apple has noted that users can opt to disable photo uploads to iCloud, expressing that CSAM is only identified within their servers, and not on users’ devices. Some experts interpret this as an acknowledgment that users who disable iCloud uploads can avoid the scanning altogether.
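The threshold-based hash matching described above can be sketched in simplified form. The hash database, threshold value, and function names below are hypothetical, and real systems (such as Apple's NeuralHash or PhotoDNA) use perceptual hashes so that resized or re-encoded copies still match, rather than the exact SHA-256 fingerprints used here:

```python
import hashlib

# Hypothetical database of known-bad file fingerprints. This entry is the
# SHA-256 of the bytes b"test", included purely so the sketch is runnable.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}
MATCH_THRESHOLD = 3  # only alert once several distinct files match

def file_hash(data: bytes) -> str:
    """Fingerprint a file; the raw content is never stored or compared."""
    return hashlib.sha256(data).hexdigest()

def count_matches(files: list[bytes]) -> int:
    """Count how many files match a known fingerprint."""
    return sum(1 for f in files if file_hash(f) in KNOWN_HASHES)

def should_report(files: list[bytes]) -> bool:
    """Flag an account only when matches exceed the threshold,
    which is how the system guards against isolated false positives."""
    return count_matches(files) >= MATCH_THRESHOLD
```

The design choice worth noting is that only fingerprints are compared: the scanner never interprets image content, which is why a match can only occur against material already in the database.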

Some security experts expressed strong concerns over the modified communication tools for children. Apple alleges that device software will detect any explicit content (not hashes) within a minor’s Messages conversations, a feature that can be turned on or off by a guardian. This will alert a parent if their minor has received any image that is flagged as explicit. This seems an appropriate way to allow some supervision to protect vulnerable children from online predators; however, the algorithms currently used to detect explicit images are error-prone. It is widely known that benign, non-sexual content is consistently misflagged as explicit. To add to this, child advocates worry about the possibility of minors in abusive households being monitored through such a faulty algorithm.

Though scanning for harmful content is not a new concept, these changes will suddenly affect billions of consumers. It’s been reported that when a child, like any other user, experiences negative behaviour online, their first instinct is often to report it. However, there is currently no way to report messages within Apple’s Messages application. After causing a tremendous stir in both the privacy and child advocacy communities, Apple clarified that Messages scanning would only apply to those under 13, not teenagers, and has attempted to offer limited clarity on the new technologies.

Despite the changes, concerns remain. Children need to be protected and prioritized in terms of technology experience, but their privacy matters too. It will be interesting to see the roll-out of Apple’s polarizing changes, particularly how they will affect Apple’s reputation and ecosystem security, and whether Apple will introduce any more changes as it responds to community concerns.

Who owns my privacy and why I don’t want people to know where I drive (Mon, 18 Nov 2019)

To me, “big data” has become synonymous with Big Brother, the central political figure behind data collection and monitoring in George Orwell’s “1984.” Big Data plays a similar role in today’s society, but it’s often unclear who or what organizations play this role, and what parts of my private data are being collected in violation of my personal privacy. Following the revelation of the Facebook-Cambridge Analytica scandal in 2018, the public consciousness has grown to include privacy and data collection as primary concerns, but there hasn’t been much transparency, or change, in this practice.

In the Facebook-Cambridge Analytica scandal, people’s personal information was harvested without their consent and used to create “psychographic”[1] profiles that were then used for political manipulation in the Brexit vote, as well as the 2016 American election.[2] More recently, Facebook was caught outsourcing the transcription of audio chats from Facebook Messenger to third-party contractors, thus exposing the private conversations of those Facebook users who had opted into Facebook’s transcription service.[3]

But the collection of data occurs in all aspects of daily life, not just through social media. A statistician working for Target explained how the retailer kept tabs on its customers’ purchasing habits and tried to target (no pun intended) pregnant women based on their purchasing patterns.[4] This led to a father learning that his teenage daughter was pregnant after Target sent maternity flyers addressed to the teen. These instances highlight how little awareness, truly informed consent, and control people have over their own information. This is a huge issue: “data flakes off us like dead skin cells,”[5] and corporations should not be profiting from those moments.

To address the issue of data exploitation, the European Union has implemented the General Data Protection Regulation (“GDPR”),[6] which aims to protect citizens’ privacy and increase the amount of control individuals have over their own data. The GDPR also addresses the transfer of EU citizens’ personal data outside of the EU to jurisdictions like Canada that don’t have equivalent legislation already in place. Currently, the federal government has a measly two pieces of legislation governing Canadians’ privacy and personal data: the Privacy Act[7] and the Personal Information Protection and Electronic Documents Act (“PIPEDA”).[8] The Privacy Act applies to federal institutions and is meant to govern “a person’s right to access and correct personal information that the [Canadian] Government […] holds about them,”[9] while PIPEDA oversees the private sector. However, neither of these provides resources for Canadians to educate themselves or protect their data. PIPEDA outlines “fair information principles,”[10] but these fail to provide avenues for consumers to seek recourse when their privacy has been violated, or ways of monitoring and protecting their privacy. Canadians might be protected by the patchwork of privacy laws that provinces have enacted, or by the precedents set by some Canadian courts.[11] Canada lags in data protection and in empowering people to control their data.

This becomes increasingly important in the discussion of autonomous vehicles, an emerging area of data collection. Data will need to be collected to ensure the safe operation of these vehicles and to prevent security breaches that could be exploited by ill-meaning entities. Big data has been central to the advancement of autonomous vehicles, but it is unclear what data has been collected, and how.[12] Deployment of these vehicles doesn’t make clear how future data will be collected and used, and could lead to the evisceration of privacy altogether, with devastating consequences. I believe that to successfully and safely integrate autonomous vehicles, rigorous data regulation must be well established before the day my future car drives me to my destination.

The Facebook-Cambridge Analytica scandal threatened democracy, and now the potential improper collection and management of data could threaten our physical safety in the form of autonomous vehicles. The type of successful data regulation I foresee ought to require consumers’ informed consent, control, and awareness that data collection is occurring.[13] No longer should “Terms and Conditions” pages or mandatory cookie-tracking agreements be used to exploit the average internet user.


Written by Julianna Felendzer, Osgoode JD Candidate, enrolled in Professors D’Agostino and Vaver’s 2019/2020 IP & Technology Law Intensive Program at Osgoode Hall Law School. As part of the course requirements, students were asked to write a blog on a topic of their choice.

[1] Psychographic profiles rely on a consumer’s psychological characteristics to describe them; this encapsulates values, opinions, attitudes, interests, and lifestyles and can be used to personalize advertising. See William D. Wells, “Psychographics: A critical review” (1975) Journal of Marketing Research 12 at 196.

[2] Issie Lapowsky, “How Cambridge Analytica Sparked the Great Privacy Awakening”, WIRED (17 March 2019), online: <www.wired.com> [perma.cc/VL7T-FRE6].

[3] Sarah Jeong, “No, Facebook Is Not Secretly Listening to You”, New York Times (20 August 2019), online: <www.nytimes.com> [perma.cc/8C9M-3E3F].

[4] Charles Duhigg, “How Companies Learn Your Secrets”, New York Times (16 February 2012) online: <www.nytimes.com> [perma.cc/ENG8-D47Z].

[5] Charlie Warzel, “The High Stakes of Living Online”, New York Times (6 August 2019), online: <www.nytimes.com> [perma.cc/JP7Q-D7KZ].

[6] EC, Commission Regulation (EC) 679/2016 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), [2016] OJ, L 119/1.

[7] Privacy Act, RSC 1985, c P-21.

[8] Personal Information Protection and Electronic Documents Act, SC 2000, c 5.

[9] Office of the Privacy Commissioner of Canada, “Summary of privacy laws in Canada”, online: <www.priv.gc.ca> [perma.cc/PC8J-XNJY].

[10] Office of the Privacy Commissioner of Canada, “PIPEDA fair information principles”, online: <www.priv.gc.ca> [perma.cc/5W8C-4ZYU].

[11] Douez v Facebook, Inc., 2017 SCC 33 [Douez].

[12] Magnolia Potter, “Big Data’s Role in Self-Driving Car Development” (5 April 2019), online: <www.insidebigdata.com> [perma.cc/CU5J-983F]. See also Bernard Marr, “BMW: Using Big Data And Artificial Intelligence To Create Autonomous Cars”, online: <www.bernardmarr.com> [perma.cc/Q62M-6BWA].

[13] Lucille Perreault, “Big Data and Privacy: Control and Awareness Aspects”(Paper delivered at the International Conference on Information Resources Management (CONF-IRM), Ottawa, 20 May 2015) [unpublished].

Is Privacy a Dead Letter? (Wed, 30 Oct 2019)

In one of Osgoode Hall Law School’s courses, Professor Maura Grossman engages the class with weekly reflections on the legal and ethical issues raised by new advancements in technology. Based on class discussions and reading materials, students are encouraged to provide their opinions on specific questions each week. For one particular reflection, Professor Grossman assigned the following questions:

  • Is privacy a dead letter?
  • Regardless of whether or not you think so, how do we make the world safe and habitable under the present circumstances?

I argue that privacy protection, while at an all-time low in its enforcement, is not completely ineffectual. With better legislation and more technical solutions to strengthen the legal rules, privacy protection can improve.

In today’s age of technological connectivity, I do not believe it is realistic to hold organizations accountable to a standard of privacy as anonymity. Instead, privacy should be framed as the ability to control other people’s access to one’s personal information. So how much control do consumers have over their personal information? At this stage, businesses collect consumer data for purposes beyond the consumer’s control. Even though companies constantly collect personal data, the current Canadian privacy framework provides guidance on how to handle personal information and even gives consumers some limited power through access and consent mechanisms. Unfortunately, there is always the risk of re-identification, which is the reconfiguration of de-identified data. With a number of new legislative frameworks on the rise, these processes for collecting, storing, and sharing data can only get more secure.

Contrary to my views, some commentators take the perspective that if we cannot maintain anonymity completely, then we might as well abandon the concept of privacy. For example, John Suler, professor of psychology at Rider University in New Jersey, has argued that both a public and a private (or anonymous) self are necessary to form human identity, and both need to be intact for our well-being. Further, researchers from Carnegie Mellon University in Pennsylvania found that people sought anonymity as a personal safeguard, which also helped them try new things and express themselves free from judgment. In a modern, connected world, however, I do not believe it is possible to be completely anonymous at all times, whether on the internet or in physical public spaces. This is not inherently a negative thing, nor do I believe it is something everyone wants at all times. Research on the “privacy paradox” demonstrates this point quite well: while people value privacy, they do little to preserve it. I believe this is one of the issues that has led to the depletion of privacy protection. For a long time, consumers were not showing the government or technology companies that they cared about privacy. However, this was likely due to a lack of understanding of long privacy policies written in complex legal jargon, and the idea that if you do not consent to these policies then you cannot use the services.

Is it fair then that all of the onus falls on the individual to protect their privacy? There is a clear power imbalance between big technology companies, which provide free, convenient services, and individuals without any legal prowess or technical knowledge looking for cost-effective technology. It is very easy to click “I accept” on terms of service and privacy policies when an application is free and everyone you know also uses the service. This has led to the sharing of personal information where individuals had a reasonable expectation that their information would remain private, not shared with, or even sold to, third parties.

I believe that there is hope to turn this trend around. Take the example of California’s Consumer Privacy Act. Following the GDPR, this is the next iteration of legislation that can empower consumers. The Consumer Privacy Act gives consumers the power to request the personal data that companies collect, learn how the data is used, and even prohibit the sale of their data to third parties. Most importantly, companies cannot charge consumers a different price for the product if they choose to exercise their privacy rights, thus erasing the dichotomy between privacy and convenience.

Strengthening privacy protection also requires further collaboration between regulators and those in the technology industry. One possibility is for the government to provide some form of incentive (likely monetary) to entrepreneurs to build more ethical technology companies. For example, some start-ups provide consumers with a private alternative to Google Docs that does not collect their personal information and encrypts all of their work. Another good example came from a panel discussion with The Blockchain Hub at the Lassonde School of Engineering at żě˛ĄĘÓƵ. The panelists discussed the role the government plays in implementing and regulating new technology like “digital identity”. Essentially, this concept allows people to identify themselves electronically without paper documents. The most important part of this technology is that the individual is in control of their personal information, and their information does not have to be stored on a central database that can be sold, or even hacked. In order for the technology to be used for good and adopted broadly, the government would have a key role in distributing digital IDs and providing a way to authenticate them (so we can trust people are who they say they are). The government could get specific guidance from technology experts and look to case studies of countries that have previously tried this technology in order to improve the current state of privacy protections and implement new technology-driven solutions.

Written by Summer Lewis, a second year JD Candidate at Osgoode Hall Law School. Summer is also the Content Editor of the IPilogue.

This post was originally submitted as a course reflection and was reworked for publication with the IPilogue.

ICYMI: Highlights from Part 2 of IP Osgoode's Bracing for Impact AI Conference Series (Mon, 08 Apr 2019)


On March 21, 2019, we had the pleasure of attending IP Osgoode’s Bracing for Impact conference series held at the Toronto Reference Library. This year’s conference theme was data governance, with a focus on novel legal issues in two key sectors: health/science and smart cities. Professor D’Agostino’s opening remarks touched on the legal and ethical dimensions of data governance, given the large amount of activity over the last year in the AI space. The day was broken down into five panel discussions, with a luncheon keynote by Professor Kang Lee from the University of Toronto.

Why is Data so Important to the Development of AI?

The first discussion focused on how data quantity and quality determine AI capability. Jonathan Penney (Assistant Professor of Law; Director, Law & Technology Institute, Schulich School of Law, Dalhousie University) provided three instances where data was more important than the AI systems themselves: in advancing AI, in addressing bias and discriminatory practices in existing systems, and in AI accountability and transparency to understand decision making. Notably, Alexander Wissner-Gross examined the last 30 years of AI development and found that the recent advances were largely due to the availability of large data sets. In 2011, IBM’s Jeopardy champion Watson used data from 8.6 million Wikipedia articles, and in 2014, GoogleNet’s object classifier was trained on 1.5 million ImageNet images. Carole Piovesan (Partner & Co-Founder, INQ Data Law) echoed the importance of data to AI systems and touched upon the two growing debates regarding data exchange and privacy. The crux of the privacy debate is the trade-off between privacy as a quasi-constitutional value and the importance of innovation and the need for data to produce public goods. She called upon the audience to think about what a fair exchange in today’s data marketplace means to them. Finally, the shifting policy landscape led by the EU’s adoption of the General Data Protection Regulation (GDPR) was discussed. In Canada, current regulations still focus mainly on consent. Both speakers acknowledged that we should be moving towards establishing standards, as very few people actually enforce their rights.

Intellectual Property at a Crossroad

Three key ideas came out of the second panel discussion: whether AI systems and programs are eligible for copyright or patent protection under current statutes, the international implications and developments, and the importance of AI in collaboration. Dave Green (Assistant General Counsel, IP Law & Policy, Microsoft) shared Microsoft’s perspective on AI’s role in enabling machine intelligence to simulate or augment elements of human thinking. Two copyright issues that come into play with AI are defining “Works of Authorship” and identifying whether specific types of “copying” are enough to create liability, both of which have been complicated by the use of computer programs and factual materials. Internationally, the requirement that humans be the authors of creative works is found in the laws of the US, Hong Kong, India, New Zealand, the UK, and other countries. As technology and AI advance, do we want to continue to insist that authors of creative works be human? If we don’t, what does that say about downstream issues such as intent, infringement, and liability? With regard to international approaches to data mining: should there be a fair dealing exception, particularly when addressing the issue of bias? WIPO recently established a new division that focuses purely on AI, which will be especially important given the spike in AI patenting activity over the past several years. Shlomit Yanisky-Ravid (Faculty Member and Lecturer, ONO Academic Law School and Fordham Law School) challenged the audience with the Turing Test, showing that it is often difficult to distinguish between works created by AI and those created by a human being.

Catherine Lacavera (Director of IP, Litigation and Employment, Google Inc.) shared her belief that the existing patent and copyright systems are robust enough to deal with the changes we are seeing in AI, though the regulatory and social-impact front of AI is changing at a fast pace. In this regard, it is important to balance social benefit against the potential for abuse, and to build diverse data sets and incorporate privacy and affordability into our design principles going forward. Maya Medeiros (Partner, Norton Rose Fulbright Canada LLP) stressed the importance of using IP rights to facilitate multi-party collaborations that protect AI innovation and incentivize collaborative behaviour. Furthermore, she raised the issues of fair dealing in data mining and the use of different types of IP rights to protect different aspects of the works being generated.

Resolving Data Barriers

The third panel focused on the tools required to access data and facilitate the development of AI.

Momin Malik (Data Science Postdoctoral Fellow, Berkman Klein Center for Internet & Society at Harvard University) discussed how AI is beneficial in certain contexts, such as predicting behaviour. However, the data that is valuable for AI is often limited by access to copyright-protected materials. For example, in developing its information retrieval system, Google faced many copyright issues. However, the company successfully navigated those challenges by entering into agreements with publishers, ultimately making data more accessible to the public.

Paul Gagnon (Legal Counsel, Element AI) contemplated whether sui generis legislation is the way forward. Europe, for example, relied on the existing concept of fair dealing as an exemption for data mining. However, this exemption is limited, as it only applies to researchers and not commercial institutions. Having open data and having accessible data are two distinct concepts: accessibility does not mean you may use the data, as uses may be restricted to specific purposes, such as “for academic use only”.

Dave Green concluded the panel discussion by contemplating whether copyright could “make nice with AI”. AI does not copy for the purpose of replicating a work or infringing on the underlying value of its expression; rather, it can unlock insights different from those of “Works of Authorship”. This is the difference between the use of a photo as a work, for aesthetic purposes or factual reporting, and the use of a photo as data. Green looked at examples of how different jurisdictions are making copyright safe for AI and machine learning, such as the fair use exception in Israel. Democratizing the right to learn and research is essential to this field, and it remains to be seen how other jurisdictions may embrace this fact.

Luncheon Keynote: Affective Artificial Intelligence & Law: Opportunities, Applications, and Challenges

Kang Lee (Professor and Tier 1 CRC Chair in developmental neuroscience, University of Toronto) amazed the audience with a showcase of his connected health venture. Dr. Lee’s interdisciplinary invention brings together research from neuroscience, psychology, physiology, and deep learning to produce AI that can detect, measure, and analyze human affect through physiological cues. The venture’s mobile application turns smart devices into a personal health tool that individuals can use to manage stress and get updates on their personal health. It uses Transdermal Optical Imaging (TOI™), which uses video to recognize blood flow in the human face; this imaging is then processed by AI that can detect and measure different human emotions. Dr. Lee’s work is significant as it demonstrates how AI can improve the health and science fields to give patients more control over their health care.

Big Data, Health & Science

The fourth panel discussion focused on the unique AI and data issues in the health and science sectors. James Elder (Professor, Lassonde School of Engineering; York Research Chair in Human and Computer Vision, żě˛ĄĘÓƵ) discussed potential uses for converting raw data into 2D images and subsequently converting these images into 3D models. 3D modelling with real data has applications for road and pedestrian traffic. The technology may also address some privacy concerns since his 3D virtualization technology turns the 2D images into avatars, which has the effect of anonymizing visual appearances. There are many opportunities for visual AI to help improve daily processes.

Victor Garcia (Managing Director & CEO, ABCLive Corporation) discussed how big data can transform the health sciences. Data helps to improve the way companies in this sector do business: clinical, insurance-claims, pharmaceutical, research and development, patient-behaviour, and lifestyle data can all contribute a plethora of knowledge to the health sector. These can improve process efficiencies and make hospital resources available sooner to new patients. For example, Humber River Hospital used data analytics to improve its health care services and increase efficiency by 40%.

Ian Stedman (PhD Candidate, Osgoode Hall Law School; Fellow in AI Law & Ethics at SickKids’ Centre for Computational Medicine) highlighted SickKids’ move to integrate AI into their practice with the development of a task force to examine how data governance and policies, infrastructure, AI solutions, and ethics interact before implementing new AI tools. Stedman stressed that data source and quality are essential because, in the health sector, one must ask all the right questions to make accurate conclusions and diagnoses. With clinical studies, it is much easier to access data since there is a research plan, which includes the research purpose, the targeted population, and the results the researcher hopes to observe. With the data that AI relies on, however, researchers study data to find patterns in order to unlock its potential value. It is therefore difficult to ask for secondary-use disclosure before the research is conducted, when the researcher may not yet know what they are looking for. The takeaway is that regardless of the industry, harmonization and collaboration are key. There is opportunity to put data together from different sources to discover the potential of new clinical decision-making tools.

What Makes a Smart City?

In Toronto, and internationally, data privacy issues have come to the forefront of public discussion due to the development of smart cities. Given the proposed Sidewalk Toronto project, the collection, storage, and use of data have led to a heated debate about data governance. The Mayor of Barrie, Jeff Lehman, discussed a project that calls upon start-ups and small organizations to develop new technologies that use data to address civic challenges. Instead of putting out a traditional municipal tender, the participating cities released a Request for Solutions and invited responses from the public to provide a cohesive opportunity for collaboration. On the issue of data localization in the Sidewalk Toronto debate, Mayor Lehman believes that consent is possible, but that the data must reside in Canada so the national government can set the rules around the data being collected. Finally, Mayor Lehman advocated for the use of Privacy Impact Assessments to evaluate the impact of new technology on privacy.

Neetika Sathe (Vice President, Advanced Planning, Alectra Inc.) argued that data policies for smart cities must be worked on at every level of government to develop a national data strategy. Sathe also introduced the audience to some of Alectra’s projects and the data collection challenges associated with each, including an end-to-end integrated EV workplace charging pilot project, a project that collects smart meter data, and a project that uses a private blockchain network to limit access to data.

Natasha Tusikov (Assistant Professor, Dept. Social Science, żě˛ĄĘÓƵ; The City Institute at żě˛ĄĘÓƵ) challenged the audience to think about who should own, control, and govern data related to smart cities. Prof. Tusikov discussed the issue of conflicting public and private authority, raising her concern that Waterfront Toronto is not an expert in IP, but in land development. As an example of regulating the governance of smart cities, Barcelona developed a manifesto outlining the importance of technological sovereignty and maintaining digital rights.

To close the panel discussion, John Weigelt (National Technology Officer, Microsoft Canada Inc.) spoke about the importance of being deliberate about who participates in developing a smart city and what business model it should follow. If employed correctly, AI can help solve societal challenges, and the municipalities and companies that thoughtfully clarify their approach to AI first will benefit most.

The conference encouraged thought-provoking discussion about data governance and its implications for health and smart cities. We hope that the discussion about data collection and what we value as a society continues beyond this event. Thoughtful and inclusive discussion will allow us to collectively brace for impact as AI technology continues to advance.

 

Written by Lauren Chan and Summer Lewis. Lauren Chan is an IPilogue editor and a business student at the University of Guelph, and Summer Lewis is an IPilogue editor and a JD candidate at Osgoode Hall Law School.

 

The post ICYMI: Highlights from Part 2 of IP Osgoode's Bracing for Impact AI Conference Series appeared first on IPOsgoode.

]]>
The Dark Side of Wearable Technology /osgoode/iposgoode/2019/03/07/the-dark-side-of-wearable-technology/ Thu, 07 Mar 2019 16:08:42 +0000 https://www.iposgoode.ca/?p=3250 In an earlier post, I discussed how wearables are becoming prominent in modern life, with Toronto being a notable hotspot for technology development and related interest. From a legal perspective, there are two main concerns with wearable technology: privacy and product liability. This instalment in the Toronto Wearables Series will focus on the former. The […]

The post The Dark Side of Wearable Technology appeared first on IPOsgoode.

]]>
In an earlier post, I discussed how wearables are becoming prominent in modern life, with Toronto being a notable hotspot for technology development and related interest. From a legal perspective, there are two main concerns with wearable technology: privacy and product liability. This instalment in the Toronto Wearables Series will focus on the former.

The core privacy concern with smart clothing is that the articles are constantly collecting, transmitting, and storing data, which means they hold information that is often personal, private, sensitive, or confidential. This makes smart clothing’s data mining abilities extremely strong. The concern is compounded by the fact that this information can easily be posted on social media networks, making it available not only to “friends” of the user, but possibly also to unknown or untrusted parties. Furthermore, wearables can collect information discreetly, so users do not actually know what data is being collected and often underestimate their privacy risks. In fact, a recent study showed that there is “a significant gap between reported concerns and actual users’ behaviors, reinforcing that users often sacrifice their privacy in exchange of benefits.” Put simply, the non-invasive biomedical, biochemical, and physical measurements of wearables have invasive implications for a user’s privacy. However, given the novelty of smart clothing, the extent of these privacy concerns is not yet fully understood, which is why empirical studies are necessary.

The same study collected a variety of online comments from users of wearables. Based on the consumer feedback, the study concluded that the primary privacy concerns are linked to the type of personal data that a given wearable device collects, stores, processes and shares. For example, there is a lower level of concern regarding smart accessories that are seen as a gadget (e.g. Fitbits), versus smart clothing that covers a large part of the body. Furthermore, embedded sensors, such as cameras and microphones, pick up data about the user and even people nearby, often without their awareness or consent. The nature of this data is frequently personal and confidential, which implicates privacy issues, especially with respect to surveillance. Other functions of wearables, such as heart rate monitors, glucometers, and activity trackers, can also be intrusive.

Interestingly, even though users perceive wrist-mounted devices as non-invasive accessories from a privacy perspective, the study found a high associated risk. Users report an increased feeling of safety and confidence because they depend on this type of wearable to track both biomedical data and daily movements, such as their location when in an unknown area. The ability to track location seems appropriate because of the convenience of having GPS at the ready. However, communicating a user’s location information outside the user’s control poses a substantial threat, because once location is sensed and stored, it can be shared online, in real time, through live social media feeds. Yet, given an appearance akin to a watch or a bracelet, a wearable’s presence often goes unnoticed, which means the underlying privacy risk is not seen as a daily concern; the user more acutely senses its convenience benefits. This is in stark contrast with the more common smartphone, with which the user has a more conscious interaction.

In fact, smartphone integration is a common feature of smart clothing, allowing users to synchronize their clothing with their phones for the sake of convenience. From a privacy perspective, however, this means that all of the implications associated with smartphones are added to the list of concerns regarding smart clothing. For example, more technologically advanced smart clothing could have access to a user’s photos, contacts, bank information, and applications, making all of that data, in addition to the collected biometrics, vulnerable to being shared publicly. Another notable example: embedded speech recognition applications in both smartphones and smart clothing allow the convenience of hands-free interaction. However, the heightened sensitivity needed to pick up such commands means that even when a user is not alone, a potentially confidential conversation between the user and another party can be captured and stored, once again without knowledge or consent.

The above suggests two concerning points about the privacy risk associated with smart clothing. First, users are already anxious about a host of privacy issues, but the (perhaps more noticeable) benefits offered by these devices cause them to become more willing to sacrifice their privacy. Second, even though users have articulated some concerns, these are often misdirected or underestimated. This means that users do not know precisely what to worry about, and are therefore ill-equipped to protect themselves. Indeed, new applications, such as facial recognition software embedded in smart technologies, offer such a profound sense of convenience and marketable novelty that consumers willingly allow a device to repeatedly capture and store every inch of their face. This misplaced sense of trust in smart technologies, and particularly smart clothing, presents a significant barrier to technological advancement, as users’ engagement is difficult to predict.

This is the second post in the Toronto Wearables Series by Saba Samanian regarding wearable technology and its IP and privacy law implications. Saba was recently appointed a Toronto Ambassador and seeks to do her part in fostering the wearables community in Toronto.

 

Written by Saba Samanian, IPilogue Editor and JD Candidate at Osgoode Hall Law School.

The post The Dark Side of Wearable Technology appeared first on IPOsgoode.

]]>
The Tech Law Ultimatum: Consent or Exile? /osgoode/iposgoode/2018/11/16/the-tech-law-ultimatum-consent-or-exile/ Fri, 16 Nov 2018 16:43:32 +0000 https://www.iposgoode.ca/?p=2797 Living in the twenty-first century comes with the need to manage expectations. While we live in a modern age with a variety of technological advancements, we may not be as innovative as we previously imagined. After decades of television shows like The Jetsons, some may even be inclined to ask, “Where’s my jetpack?”Ěý Professor DaithĂ­ […]

The post The Tech Law Ultimatum: Consent or Exile? appeared first on IPOsgoode.

]]>
Living in the twenty-first century comes with the need to manage expectations. While we live in a modern age with a variety of technological advancements, we may not be as innovative as we previously imagined. After decades of television shows like The Jetsons, some may even be inclined to ask, “Where’s my jetpack?” During his talk at Osgoode Hall Law School this fall term, Professor DaithĂ­ Mac SĂ­thigh spoke about the challenging relationship between technological innovation and the law. Prof. Mac SĂ­thigh addressed the technological advancements we have made and what is still on the inventive (and legal drafting) table in his lecture, “Help! My Jetpack is an Algorithm: Smart Cities, Sharing Economies, and Law in the Face of Disruption”.

Professor Mac Síthigh drew on remarks made this year by Sadiq Khan, the Mayor of London, and stressed the important role the law has in relation to technological and social development. At SXSW, Khan explained that the law plays a balancing role in mitigating the potentially negative impact of disruption while allowing society to evolve.

The concept of “smart cities” highlights how the law is performing in the face of twenty-first-century “disruption”. Professor Mac SĂ­thigh linked the smart city concept to the sharing economy, which he defined as transforming under-utilized assets in a manner that makes them more accessible to a community, potentially reducing the need for individual ownership of these resources.

Citing a recent example, Professor Mac Síthigh explored how the collection of data in these cities unveils new legal tensions: Alphabet’s Sidewalk Labs is reimagining Toronto’s eastern waterfront area. This variation of a smart city will use sensors to measure garbage disposal, recycling, noise, and pollution. The increased presence of cameras can even collect data to help improve the flow of traffic. While the project promises some of the twenty-first-century innovations many have been waiting for, it also reveals how some of the risks of such technologies remain underexplored.

There is an inherent trade-off in collecting data to help cities become more efficient and green: residents give up their privacy rights for the good of society. There is no way to live off the grid in this type of environment, which means that individuals who want to be excluded from data collection would likely have to reside outside the community. Is full consent or exile the only choice in the age of smart cities?

Currently, different Canadian laws may apply depending on which entity is collecting the data, thus presenting different methods of action for residents.

  1. If a commercial technology company is collecting the data, the Personal Information Protection and Electronic Documents Act (PIPEDA) applies to these processes.
  2. When this data is collected, accessed, or used by federal government institutions, the Privacy Act applies.

Both of these acts regulate how personal information can be shared, and both may apply to data collected through smart cities.

Research from the Canadian Internet Policy and Public Interest Clinic (CIPPIC) reveals one of the weaknesses of these laws in their current form. Where information is not “personal”, it can be freely shared with third parties. For data to be non-personal, technology companies would be required to strip it of personal identifiers. So, data on garbage disposal, for example, cannot be linked to any addresses, names, photographs, and so on if the information is to be sharable. Another caveat is that individuals can choose to protect their information through confidentiality terms in a contract. This means there could be a great onus on the residents of smart cities to find ways to protect their information if they truly wish for their data to remain private.
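The de-identification requirement described above can be sketched in a few lines of code. This is a hypothetical illustration, not CIPPIC’s or any city’s actual process: the field names are invented, and real de-identification must also guard against re-identification through combinations of remaining fields.

```python
# Hypothetical sketch: stripping direct personal identifiers from a
# smart-city sensor record before sharing it with third parties.
# Field names ("name", "address", etc.) are illustrative assumptions.

DIRECT_IDENTIFIERS = {"name", "address", "email", "photo_url"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

raw = {
    "name": "Jane Doe",            # personal: must not be shared
    "address": "123 Queen St W",   # personal: must not be shared
    "bin_id": "WF-0042",           # which garbage bin was used
    "fill_level": 0.8,             # sensor reading
    "timestamp": "2018-11-16T08:30:00",
}

shared = deidentify(raw)
# `shared` keeps only the non-personal fields: bin_id, fill_level, timestamp
```

Even after this step, quasi-identifiers such as precise timestamps and locations can sometimes be combined to re-identify individuals, which is why the adequacy of simple stripping is itself debated.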

As Professor Mac Síthigh’s talk makes clear, smart cities and the concept of a sharing economy are not new forms of technology, rather they are new processes that rely on data in novel ways. In the same way that technology companies have rethought data collection, it is necessary for lawyers and policy makers to rethink how the law applies to this newest iteration of technology. It requires a careful balance of the existing laws that seem applicable to smart cities, such as privacy laws, in addition to new provisions that give consumers more opportunities to protect and take control of their data without completely excluding them from the innovation process.

 

Summer Lewis is an IPilogue Editor and a JD candidate at Osgoode Hall Law School.

The post The Tech Law Ultimatum: Consent or Exile? appeared first on IPOsgoode.

]]>
5G Networks Promise More Innovation and Disruption — But They Promise More Regulatory Discussions, Too /osgoode/iposgoode/2018/07/25/5g-networks-promise-more-innovation-and-disruption-but-they-promise-more-regulatory-discussions-too/ Wed, 25 Jul 2018 18:33:45 +0000 https://www.iposgoode.ca/?p=31990 On June 4 to 6, 2018, the 2018 Canadian Telecom Summit (“CTS 2018”) featured discussions on the rise of a new protagonist in the information domain — 5G wireless networks. The event provided scope for continuing the conversation on how to support 5G deployment and what the new technology will mean to entrepreneurs, innovators, the […]

The post 5G Networks Promise More Innovation and Disruption — But They Promise More Regulatory Discussions, Too appeared first on IPOsgoode.

]]>
From June 4 to 6, 2018, the 2018 Canadian Telecom Summit (“CTS 2018”) featured discussions on the rise of a new protagonist in the information domain: 5G wireless networks. The event provided scope for continuing the conversation on how to support 5G deployment and what the new technology will mean to entrepreneurs, innovators, the economy, and all Canadians. Here are some developments to watch as the process for setting 5G standards is underway.

All things smartened up

5G is the next generation of wireless mobile networks. The speakers highlighted that 5G networks are primarily designed to increase capacity and enhance connectivity while operating at much lower latency. These ultrafast 5G airwaves promise to connect entire cities through the sharing of information, as buildings, cars, people, and a myriad of devices will be able to communicate with each other. The new technology promises to further enhance the users’ experience with smart devices, smart cars, and smart homes. In addition, 5G networks will allow emerging technologies to operate at a much larger scale.

From connecting people to connecting things in real time

The panel on 5G networks remarked that, due to the myriad of new capabilities and disruptive applications made available by 5G wireless networks, virtually all industries will experience important changes in how they work and cooperate with one another. Industries will leverage real-time connectivity to the benefit of both consumers and businesses, and new capabilities will allow segments of industries to act on real-time economic data. To enable all of these features, 5G network communications will increasingly rely on artificial intelligence (“AI”) and big data to build new applications and create more services. As a result, interested parties will have to be attentive as a new regulatory landscape develops to accommodate the demands of this increased data sharing.

How does society become 5G ready?

In addressing this question, speakers at CTS 2018 remarked that putting the new technology to use will require several regulatory and policy discussions. Among other challenges, there will be massive amounts of data that can be quickly collected, mobilized, and exchanged across 5G networks. In the commercial and government space, several jurisdictions have already engaged in devising regulatory frameworks for the deployment of the new technology.

Canadian policy makers know that this is an industry that needs investment as well as a modern regulatory landscape across municipal, provincial, and federal levels of government. For example, in his keynote, the Honourable Navdeep Bains, the federal Minister of Innovation, Science and Economic Development, announced that the Government of Canada will be launching two consultations to support 5G deployment: the government is to add an additional 1 GHz of millimetre-wave spectrum to support 5G and is beginning a consultation in advance of the 3500 MHz auction. Minister Bains also discussed an initiative through a Canada-Québec-Ontario partnership, ENCQOR, which he described as “a 5G test bed that will advance the development of 5G networking solutions and next-generation technologies and applications”.

Other countries are taking great leaps towards leadership in the manufacture and operation of network infrastructure. In Canada, industry operators and other stakeholders are well positioned to develop relationships with different levels of government with a view to speeding up the process of creating standards and laying out best practices for the operation of 5G networks.

Bottom line

5G networks promise innovation and disruption, as well as policy and regulatory discussions. Industry operators and other stakeholders should be attentive to the new opportunities arising from 5G networks, but they should also stay abreast of the impact the Canadian regulatory landscape may have on the industry, particularly given the challenges that may arise from the vast amounts of data that will be quickly collected, mobilized, and exchanged across 5G networks.

 

Bruna D. Kalinoski is a contributing editor for the IPilogue and holds an LLM from the Osgoode Professional Development Program at żě˛ĄĘÓƵ.

The post 5G Networks Promise More Innovation and Disruption — But They Promise More Regulatory Discussions, Too appeared first on IPOsgoode.

]]>
Highlights from Canadian Telecom Summit 2018 /osgoode/iposgoode/2018/07/23/highlights-from-canadian-telecom-summit-2018/ Mon, 23 Jul 2018 16:54:52 +0000 https://www.iposgoode.ca/?p=31964 The Canadian Telecom Summit has been one of the largest annual meetings of telecom professionals in Canada for nearly twenty years. This year’s summit, from June 4 – 6 in Toronto, featured wide-ranging discussions including leading telecom executives from Canada, the U.S. and Europe and government officials on the major issues and goals facing the […]

The post Highlights from Canadian Telecom Summit 2018 appeared first on IPOsgoode.

]]>
The Canadian Telecom Summit has been one of the largest annual meetings of telecom professionals in Canada for nearly twenty years. This year’s summit, held June 4 to 6 in Toronto, featured wide-ranging discussions with leading telecom executives from Canada, the U.S., and Europe, along with government officials, on the major issues and goals facing the many players in telecommunications. These included panels focusing on: 1) preparing Canada for 5G data coverage; 2) the need for telecoms to partner with big data firms as households become increasingly digitally connected (or “smart”); 3) the future of privacy and data security for customers; and 4) the Federal Government’s priorities for the sector, including bringing greater access to affordable data to urban, rural, and Indigenous communities.

I. Preparing Canada for 5G Coverage

5G data coverage, the next generation of wireless data services, promises to make it possible for cities to become “smart”, as buildings, utilities, and people will be able to constantly share data. This connectivity can make cities more efficient by allowing businesses and government to mine this data, discover inefficiencies and redundancies, and correct them. Canada promises to be a major part of this initiative, with Google’s intention to build a development in Toronto featuring a fully interconnected neighbourhood.

The jurisdictions that can achieve 5G coverage will have a competitive edge in attracting new technologies and business opportunities that can take advantage of this new interconnectivity. Mr. Gideon, the CTO of Telus, noted that Canada needs a fully allocated 5G spectrum to take advantage of these opportunities. He lamented that the Canadian Radio-television and Telecommunications Commission (CRTC) has not yet fully auctioned the 3.5 to 4 GHz spectrum, where 5G will be broadcast, and that this puts Canada at risk of falling behind other jurisdictions. Not only have Western countries like the US, UK, EU member states, and New Zealand allocated or planned out this spectrum, but so have Saudi Arabia, India, China, Japan, and South Korea. In his keynote, Mr. Gideon called on the Federal Government to create a clear strategy and timeline for how and when the 5G spectrum will be allocated; only once this is done can businesses seize the new opportunities that 5G offers.

II. Big Data Firms – The Home Invaders

Mr. Weening, an EVP at a major US business-to-business telecom service in the Internet of Things (“IoT”) space, spoke of the need for telecoms to partner with large technology companies like Apple and Amazon so that Internet service providers are not left behind by the coming technology changes.

As major technology firms develop new IoT applications, like Amazon with Alexa and Google with Home, these companies are creating new interactions inside their customers’ homes, which are new opportunities to connect with their customers and build goodwill. Since these IoT devices rely on Internet connections to work, telecoms are a crucial part of this experience. However, Amazon and Google will reap the rewards of positive customer interactions and when they don’t work, the telecoms are blamed.

This leads to a situation where technology firms benefit from the goodwill while telecoms continue to be viewed as a necessary evil that facilitates online-based services. Telecoms suffer because no customer loyalty is developed when IoT devices work, yet when the devices don’t work, the telecoms take the blame. One solution is for telecoms to partner with these firms, allowing them to piggyback off the brand building those firms are engaged in. If Rogers can offer a “Rogers + Google” service, there is more likely to be a positive customer association with the Rogers brand every time Google Home helps a consumer. This strategy is already emerging, as seen in Fido’s recent bundling of third-party services with its phone plans.

III. Privacy and the Digital Footprint

While Mr. Weening’s presentation opened up many interesting possibilities for the future of telecom service, as a law student I couldn’t help but be concerned by the privacy implications. The panel on privacy and information security focused on the implications of these new services and the need to evolve consumers’ digital footprint beyond a mere email and password combination.

Ann Cavoukian, a Senior Fellow at Ryerson University’s Ted Rogers Leadership Centre and the former Privacy Commissioner of Ontario, noted that over the past two years, the percentage of consumers concerned about the privacy of their information has grown. Ms. Cavoukian pointed out that it is more important and profitable for telecoms to build trust with their customers regarding the integrity and privacy of their personal information than to collect as much data as possible. She noted that when customers are informed that their data is private and used only for a specific purpose, they are more likely to consent to future requests to use their data in different ways.

Other panelists noted that since Canada adopted the Internet earlier than most countries, its legislation is dated and out of touch with the Internet’s current realities. Jurisdictions that were slower to adopt the Internet, like the EU, have observed the effects the Internet has on society and have had an easier time legislating accordingly.

A related panel, “Cultivating an Innovation Economy”, discussed how telecoms need to help facilitate a revolution in digital identity. One of the biggest cybersecurity problems is that people protect their valuable data with an easily hacked email and password combination. Smartphones, however, are complex computers capable of acting as a digital fingerprint for online services. Telecoms that can create a secure digital identity for their customers could have a strong competitive edge as privacy and information security become greater concerns for consumers.
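One way to read the panel’s idea of a phone acting as a “digital fingerprint” is challenge-response authentication with a secret held on the device, so the secret itself never crosses the network. The sketch below is a simplified, hypothetical illustration using a symmetric HMAC from Python’s standard library; real device-identity schemes (SIM-based or public-key approaches) are considerably more involved.

```python
# Hypothetical sketch: replacing an email/password login with a
# device-bound secret and challenge-response. A symmetric HMAC stands
# in for the asymmetric signatures real schemes would use.
import hashlib
import hmac
import secrets

device_secret = secrets.token_bytes(32)  # provisioned on the phone once

def sign_challenge(challenge: bytes, key: bytes) -> str:
    """Device side: prove possession of the secret without sending it."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def verify_response(challenge: bytes, response: str, key: bytes) -> bool:
    """Server side: recompute the MAC and compare in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)                  # fresh nonce per login
response = sign_challenge(challenge, device_secret)  # computed on the phone
assert verify_response(challenge, response, device_secret)
```

Because the server issues a fresh random challenge each time, a captured response cannot be replayed, which is the property that makes a device-held secret stronger than a reusable password.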

IV. Minister Bains on Connecting the Arctic and Rural Canada

The 2018 Canadian Telecom Summit ended with a keynote speech from the Honourable Navdeep Bains, Minister of Innovation, Science and Economic Development, the department in charge of the Telecommunications Act.

Minister Bains spoke of the government’s partnership with Bell to bring mobile coverage to Canada’s North, connecting Inuit communities to the rest of Canada. The construction of over 15 cell towers across the Northwest Territories and Nunavut could also help stimulate commercial investment in Canada’s North beyond the traditional natural resource extraction industries.

Minister Bains also introduced a new connectivity initiative under which the government plans to extend data coverage to many remote rural communities in Canada that currently have none. The initiative will also provide up to 50,000 low-income families with a personal computer and access to a low-cost public internet plan for $10 per month. The goal is to help alleviate isolation and poverty in rural communities by connecting them with urban Canada and creating new opportunities in their communities.

V. Conclusion

This year’s Canadian Telecom Summit showed that the commercial opportunities created by telecommunications continue to broaden, and that Canada is far from a global leader in this area. At the same time, there is a clear sense that the current government wants to make Canada’s technology economy more competitive and to ensure that issues of privacy and accessibility are addressed. There is great promise in the commercial opportunities in this space, but measures like a clear spectrum allocation strategy and more competition in the telecom sector are needed to spur growth.

The post Highlights from Canadian Telecom Summit 2018 appeared first on IPOsgoode.

]]>
Breaking Up With Big Tech? /osgoode/iposgoode/2018/04/09/breaking-up-with-big-tech/ Mon, 09 Apr 2018 15:52:11 +0000 https://www.iposgoode.ca/?p=31583 This week, Facebook co-founder Mark Zuckerberg will make a long-awaited appearance on Capitol Hill. With Facebook under new and increased scrutiny in the United States (US) and United Kingdom (UK) following the Cambridge Analytica data breach, Facebook’s Chairman and Chief Executive Officer is set to be grilled by representatives of both the Senate and the […]

The post Breaking Up With Big Tech? appeared first on IPOsgoode.

]]>
This week, Facebook co-founder Mark Zuckerberg will make a long-awaited appearance on Capitol Hill. With Facebook under new and increased scrutiny in the United States (US) and United Kingdom (UK) following the Cambridge Analytica data breach, Facebook’s Chairman and Chief Executive Officer is set to be grilled by representatives of both the Senate and the House. The fallout from the Cambridge Analytica affair has spooked users as well as investors, igniting a #deleteFacebook campaign and sending the company’s stock price tumbling. Questions from US lawmakers are likely to focus on fundamental issues surrounding how Facebook collects, protects, and commercializes user data on its platform. These matters strike at the heart of Facebook’s advertising revenue model, meaning that Zuckerberg’s congressional moment may become a turning point for his company’s operations, as well as for the data-driven operations of his peers in the technology industry.

Companies like Facebook, Google (Alphabet), Amazon, and Uber have long presented themselves as creative pioneers who collect and analyze massive amounts of user data to improve the human condition. Savvy marketing and personal acts of altruism have combined to create a calculated image of these companies as rebels and outsiders, doing no evil, working to leverage data analytics to disrupt tired and unimaginative incumbents in order to connect and empower the world. The tech community’s first major crisis occurred via the unbridled economic hype and enthusiasm presaging the dot-com crash, and current big tech companies may be similarly humbled by ongoing pricks to the veneer covering the structural deficiencies of their data-driven business practices. Recently, French President Emmanuel Macron has spoken about the need to “dismantle […] these big giants” as a competition issue, and, here in Canada, there is a growing call for a national data strategy that prioritizes domestic interests.

Facebook’s current time in the spotlight is just the most recent instance of big tech’s proclivity for moving fast and, unintentionally, breaking the wrong things. Zuckerberg may have inadvertently said as much himself in the immediate wake of the Cambridge Analytica revelations. In an interview with the New York Times, he said, “If you had asked me, when I got started with Facebook, if one of the central things I’d need to work on now is preventing governments from interfering in each other’s elections, there’s no way I thought that’s what I’d be doing, if we talked in 2004 in my dorm room.”

Such a revelation may be an instructive lesson for a fresh-faced undergraduate student thinking through the implications of disruptive technologies for the first time. It is worrisome, however, coming from the head of a global technology behemoth who has run the company for over a decade.

But they’re not terribly shocking. Since the early 1990s, lawmakers and technologists have advanced the idea of increased connectivity through information and communication technologies (ICTs) as, what then-Secretary of State Hillary Clinton would call it some 20 years later, the “freedom to connect”. In an interview with the New York Times, Zuckerberg echoed a similar sentiment to defend Facebook’s revenue model: “The thing about the ad model that is really important that aligns with our mission is that — our mission is to build a community for everyone in the world and to bring the world closer together. And a really important part of that is making a service that people can afford. […] Therefore, having it be free and have a business model that is ad-supported ends up being really important and aligned.” However, a leaked memo from Facebook Vice President Andrew Bosworth that seemingly downplays “the ugly” side of Facebook’s activities effectively punctures this grandiose narrative. Today’s big tech firms have come of age under light-touch regulation from lawmakers and have responded by giving normative and ethical concerns a back seat to connectivity and disruption.

More recently, though, legislators on both sides of the Atlantic have begun to rethink this arrangement. In the European Union (EU), next month’s enforcement date for the new General Data Protection Regulation (GDPR) will introduce heavy fines for companies that do not comply with harmonized data privacy regulations. And at a Senate hearing into Russian online disinformation activities during the 2016 Presidential election campaign, Senator Dianne Feinstein warned representatives from Facebook, Twitter, and Google that “You created these platforms, and now they’re being misused. And you have to be the ones who do something about it—or we will.” Depending on the outcome of Zuckerberg’s appearances this week, the US Congress may begin to make good on Sen. Feinstein’s threat.

Regulating or, in the words of Macron, dismantling big tech will be no easy task. These companies have amassed large stores of data about our innermost feelings and have developed powerful technologies built on that data. They have also entranced governments with the promise of jobs and economic prosperity. It is imperative that any attempts to harness big tech for the public good are not knee-jerk or reactionary. The challenges these companies and emerging technologies pose require long-term and strategic thinking about the social, economic, ethical, and democratic impacts of our increasingly data-driven society.

 

Joseph F. Turcotte is a Senior Editor with the IPilogue. He holds a PhD from the Joint Graduate Program in Communication & Culture (Politics & Policy) at żě˛ĄĘÓƵ and Ryerson University (Toronto, Canada).

The post Breaking Up With Big Tech? appeared first on IPOsgoode.

]]>