data mining Archives - IPOsgoode: An Authoritative Leader in IP

The Dark Side of Wearable Technology (March 7, 2019)

In an earlier post, I discussed how wearables are becoming prominent in modern life, with Toronto being a notable hotspot for technology development and related interest. From a legal perspective, there are two main concerns with wearable technology: privacy and product liability. This instalment in the Toronto Wearables Series will focus on the former.

The concern with smart clothing is that the articles are constantly collecting, transmitting, and storing data, which means that they hold information that is often personal, private, sensitive, or confidential. This makes smart clothing’s data mining abilities extremely strong. The concern is compounded by the fact that this information can easily be posted on social media networks, making it available not only to “friends” of the user, but possibly also to unknown or untrusted parties. Furthermore, wearables are able to collect information discreetly, a form of data mining that leaves users unaware of what data is actually being collected. This often means that users underestimate their privacy risks. In fact, a recent study showed that there is “a significant gap between reported concerns and actual users’ behaviors, reinforcing that users often sacrifice their privacy in exchange of benefits.” Put simply, the non-invasive biomedical, biochemical, and physical measurements of wearables have invasive implications for a user’s privacy. Given the novelty of smart clothing, however, the extent of these privacy impacts is not yet fully understood. It is for this reason that empirical studies are necessary.

The same study collected a variety of online comments from users of wearables. Based on this consumer feedback, the study concluded that the primary privacy concerns are linked to the type of personal data that a given wearable device collects, stores, processes, and shares. For example, there is a lower level of concern regarding smart accessories that are seen as gadgets (e.g., Fitbits) than regarding smart clothing that covers a large part of the body. Furthermore, embedded sensors, such as cameras and microphones, pick up data about the user and even people nearby, often without their awareness or consent. The nature of this data is frequently personal and confidential, which raises privacy issues, especially with respect to surveillance. Other functions of wearables, such as heart rate monitors, glucometers, and activity trackers, can also be intrusive.

Interestingly, even though users perceived wrist-mounted devices as non-invasive accessories from a privacy perspective, the study found a high associated risk. Indeed, the study found an increased feeling of safety and confidence stemming from users’ dependence on this type of wearable to track both biomedical data and daily movements, such as the user’s location in an unfamiliar area. The ability to track location seems appropriate because of the convenience of having GPS at the ready. However, the communication of a user’s location information, outside of the user’s control, poses a substantial threat: once location is sensed and stored, it can be shared online, in real time, through live social media feeds. Yet, given an appearance akin to a watch or a bracelet, a wearable’s presence often goes unnoticed, which means that the underlying privacy risk is not perceived as a daily concern; rather, the user more acutely senses its convenience benefits. This is in stark contrast with the more common smartphone, with which the user has a more conscious interaction.

In fact, smartphone integration is a hallmark of smart clothing, allowing users to synchronize their clothing with their phones for the sake of convenience. From a privacy perspective, however, this means that all of the implications associated with smartphones are added to the list of concerns regarding smart clothing. For example, more technologically advanced smart clothing inventions could have access to a user’s photos, contacts, bank information, and applications, making all of that data, in addition to the collected biometrics, vulnerable to being shared publicly. Another notable example is that embedded speech recognition applications in both smartphones and smart clothing allow the convenience of hands-free interaction. However, the heightened sensitivity needed to pick up on such commands means that even when a user is not alone, a potentially confidential conversation between the user and another party can be captured and stored, once again without knowledge or consent.

The above suggests two concerning points about the privacy risk associated with smart clothing. First, users are already anxious about a host of privacy issues, but the (perhaps more noticeable) benefits offered by these devices make them more willing to sacrifice their privacy. Second, even though users have articulated some concerns, these are often misdirected or underestimated. This means that users do not know precisely what to worry about, and are therefore ill-equipped to protect themselves. Indeed, new applications, such as facial recognition software embedded in smart technologies, offer such a profound sense of convenience and marketable novelty that consumers willingly allow a device to repeatedly capture and store every inch of their face. This misplaced sense of trust in smart technologies, and particularly smart clothing, presents a significant barrier to technological advancement, as users’ engagement is difficult to predict.

This is the second post in the Toronto Wearables Series by Saba Samanian regarding wearable technology and its IP and privacy law implications. Saba was recently appointed a Toronto Ambassador in the wearables space and seeks to do her part in fostering the wearables community in Toronto.


Written by Saba Samanian, IPilogue Editor and JD Candidate at Osgoode Hall Law School.

Breaking Up With Big Tech? (April 9, 2018)

This week, Facebook co-founder Mark Zuckerberg will make a long-awaited appearance on Capitol Hill. With Facebook under new and increased scrutiny in the United States (US) and United Kingdom (UK) following the Cambridge Analytica data breach, Facebook’s Chairman and Chief Executive Officer is set to be grilled by representatives of both the Senate and the House of Representatives. The fallout from the Cambridge Analytica affair has spooked users as well as investors, igniting a #deleteFacebook campaign and sending the company’s stock price tumbling. Questions from US lawmakers are likely to focus on fundamental issues surrounding how Facebook collects, protects, and commercializes user data on its platform. These matters strike at the heart of Facebook’s advertising revenue model, meaning that Zuckerberg’s congressional moment may become a reckoning for his company’s operations as well as the data-driven operations of his peers in the technology industry.

Companies like Facebook, Google (Alphabet), Amazon, and Uber have long presented themselves as creative pioneers who collect and analyze massive amounts of user data to improve the human condition. Savvy marketing and personal acts of altruism have combined to create a calculated image of these companies as rebels and outsiders, doing no evil, working to leverage data analytics to disrupt tired and unimaginative incumbents in order to connect and empower the world. The tech community’s first major crisis grew out of the unbridled economic hype and enthusiasm that presaged the dot-com crash, and current big tech companies may be similarly humbled by ongoing pricks to the veneer covering the structural deficiencies of their data-driven business practices. Recently, French President Emmanuel Macron has spoken about the need to “dismantle […] these big giants” as a competition issue, and, here in Canada, there is a growing call for a national data strategy that prioritizes domestic interests.

Facebook’s current time in the spotlight is just the most recent instance of big tech’s proclivity for moving fast and, unintentionally, breaking the wrong things. Zuckerberg may have inadvertently said as much himself in the immediate wake of the Cambridge Analytica revelations. In an interview with the New York Times, he remarked, “If you had asked me, when I got started with Facebook, if one of the central things I’d need to work on now is preventing governments from interfering in each other’s elections, there’s no way I thought that’s what I’d be doing, if we talked in 2004 in my dorm room.”

Such a revelation may be an instructive lesson for a fresh-faced undergraduate student thinking through the implications of disruptive technologies for the first time. It is worrisome, however, when uttered by the head of a global technology behemoth who has run the company for over a decade.

But such statements are not terribly shocking. Since the early 1990s, lawmakers and technologists have advanced the idea of increased connectivity through information and communication technologies (ICTs) as a democratizing force, a vision then-Secretary of State Hillary Clinton would champion some 20 years later. In an interview with the New York Times, Zuckerberg echoed a similar sentiment to defend Facebook’s revenue model: “The thing about the ad model that is really important that aligns with our mission is that — our mission is to build a community for everyone in the world and to bring the world closer together. And a really important part of that is making a service that people can afford. […] Therefore, having it be free and have a business model that is ad-supported ends up being really important and aligned.” However, a leaked internal memo from Facebook Vice President Andrew Bosworth that seemingly downplays “the ugly” side of Facebook’s activities effectively punctures this grandiose narrative. Today’s big tech firms have come of age under light-touch regulation from lawmakers and responded by giving normative and ethical concerns a back seat to connectivity and disruption.

More recently, though, legislators on both sides of the Atlantic have begun to rethink this arrangement. In the European Union (EU), next month’s enforcement date for the new General Data Protection Regulation (GDPR) will introduce heavy fines for companies that do not comply with harmonized data privacy regulations. And at a Senate hearing into Russian online disinformation activities during the 2016 Presidential election campaign, Senator Dianne Feinstein warned representatives from Facebook, Twitter, and Google that “You created these platforms, and now they’re being misused. And you have to be the ones who do something about it—or we will.” Depending on the outcome of Zuckerberg’s appearances this week, the US Congress may begin to make good on Sen. Feinstein’s threat.

Regulating or, in the words of Macron, dismantling big tech will be no easy task. These companies have amassed large stores of data about our innermost feelings and have developed sophisticated technologies for exploiting that data. These companies have also entranced governments with the promise of jobs and economic prosperity. It is imperative that any attempts to harness big tech for the public good are not done in a knee-jerk or reactionary fashion. The challenges these companies and emerging technologies pose require long-term and strategic thinking around the social, economic, ethical, and democratic impacts of our increasingly data-driven society.


Joseph F. Turcotte is a Senior Editor with the IPilogue and the Coordinator. He holds a PhD from the Joint Graduate Program in Communication & Culture (Politics & Policy) at York University and Ryerson University (Toronto, Canada).

Mining the Digital Gold Rush: The Legal (L)ore around France's Data-Mining Tax (February 12, 2013)

With markets in real property, personal property, and intellectual property quite cornered, the future-savvy lawyer might consider their cutting-edge cousin, if France's data-mining tax proposal has its way: what could be termed existential property*, courtesy of Google, Facebook, Amazon, and the like. Or rather, courtesy of their users, whose digitally collected personal data may be wholesale commoditized as a direct source of tax revenue for the French government, according to a recent government-commissioned report.

Background: “Google France checked in at Bermuda”

The proposed tax is the latest volley in an ongoing fiscal battle between France and internet behemoths such as Google, Facebook, and Amazon. Essentially, it has become common practice for these companies to operate with expenses (such as labour) concentrated in high-tax countries in the European Union, such as France and the UK, while routing most of their revenues through “tax havens” such as Ireland, the Netherlands, and Bermuda, thereby avoiding an estimated average of 500 million euros per year in corporate tax in France alone. The data-mining tax is one of several proposed solutions, following earlier attempted and controversial measures.

The Rationale: “User added a new job at Facebook, Google, Amazon, and Apple”

The rationale behind the tax recommendation, elaborated upon in Forbes by one of the report's authors, is as follows: data plays such an integral economic role that it may now be considered the “raw material” of the digital economy. Users provide what may soon be literally lifetimes of data in various forms online, whether collected through behaviour-tracking cookies, submitted through tweets and searches, or inferred through analytics. This allows online companies and applications to laser-target users through features and ads, monetizing the collated data. Thus, users themselves provide data that feeds back into the supply-production-distribution-consumption chain, and according to the report's authors, this turns users into employees whose unpaid labour of providing data produces value for these companies. This user-created value is unaccounted for, and should be taxed.

Implementation: “Facebook added 1 billion friends. Auditor poked Facebook.”

Since international tax law currently fails to account for the geography-heedless nature of user data-based business models, the data-mining tax (which the French government has yet to endorse) is meant as a step towards the report's proposed longer-term reform of international tax rules. The tax would apply to both international and domestic businesses that regularly and systematically monitor the online behaviours of users in France. Tax rates would depend on various factors: how many users are tracked, the type of data collected, ethical issues, and the level of respect for user privacy and control, among others.
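The report leaves these factors unquantified, but the factor-based structure can be illustrated with a toy calculation. Everything below (the per-user rate, the sensitivity multiplier, and the privacy discount) is hypothetical, invented purely to show how such a levy might combine the listed factors; the actual proposal specifies no numbers.

```python
# Purely illustrative sketch of a factor-based data-mining levy.
# All rates and weights are hypothetical, not from the French report.

def hypothetical_data_tax(users_tracked, data_sensitivity, privacy_score):
    """Estimate an annual levy (EUR) for a firm tracking users in France.

    users_tracked    -- number of users regularly and systematically monitored
    data_sensitivity -- 1.0 (low, e.g. clickstream) to 3.0 (high, e.g. health)
    privacy_score    -- 0.0 (no user control) to 1.0 (full control); a higher
                        score discounts the levy, mirroring the report's idea
                        of rewarding respect for user privacy and control
    """
    base_rate_per_user = 0.10  # hypothetical EUR per tracked user
    levy = users_tracked * base_rate_per_user * data_sensitivity
    return levy * (1.0 - 0.5 * privacy_score)  # up to a 50% privacy discount

# A firm tracking 10M users, moderately sensitive data, weak privacy controls:
print(round(hypothetical_data_tax(10_000_000, 2.0, 0.2)))  # prints 1800000
```

The design choice worth noticing is the discount term: it is what turns the levy into an incentive rather than a flat charge, and it is also the source of the conflict-of-interest concern discussed below, since government revenue falls as privacy practices improve.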

Analysis: “@User tweets about #Privacy #ConflictofInterest #Competition and #PublicUtilities”

The idea of taxing data-mining immediately brings a number of issues to mind, the first of which is suggested by the proposal's other nicknames: some call it a “privacy tax,” while others frame it as a policy of making data “predators” pay. It may be problematic to create monetary incentives for corporations to respect user privacy, as it essentially commoditizes privacy (or the lack thereof) and may erode higher ideals of respecting privacy for its own sake; perhaps those who warrant the term “predator” should not be made to pay, but should refrain from undue preying altogether. From a practical perspective, the act of auditing companies' practices may itself involve questionably invasive technological practices.

Second, tying government revenue to companies' privacy practices the way this tax would (where less user control means the government levies higher taxes) creates a potential conflict of interest, if the government is supposed to have citizens' best privacy interests at heart. Moreover, since the data belongs to the user, the labour model underlying the report's recommendation raises the thought that perhaps users themselves should be paid for it.

Third, the data-mining proposal prompts interesting connections between privacy and competition law. As demonstrated by antitrust cases brought against several of these companies, they walk a fine line between maintaining a healthy monopoly and engaging in anti-competitive practices. Incentivizing better privacy policies through taxes may put a damper on the endless reach for data to sell to advertisers, while creating room for smaller competitors who more effectively prioritize user privacy and control.

Fourth, turning data-mining into a source of taxation evokes questions about the role of privately owned technological platforms in the public sphere. In certain respects, such websites at times seem to approach the status of public utilities. The problem is that, regardless of sociological status, these companies are economically and structurally wholly private. This unique yet rising combination means that attempts to regulate the driving business model warrant particularly careful scrutiny, and perhaps a conversation about what such sites' status ought to be.

Finally, it bears remembering, first, that the companies in question offer free services that users join on a voluntary basis (though see the public utility debate above). Second, whether or not data-mining becomes taxable, Google, Facebook, et al. already monitor and benefit from users' data, and will continue to do so regardless. One could then argue that the public may as well take advantage of that fact, in this case via taxation. As the old adage goes, after all, you are what you tweet.

Cynthia Khoo is a JD Candidate at the University of Victoria.

*Term coined for this post, based on the notion that the collected “property” is intangible (unlike real or personal property), yet not necessarily created or thought up (unlike intellectual property), but simply gleaned from users' data trails as they go about their daily lives on the internet.
