POL106H1 S Essay Week 4 (Toluwanimi Davies)
Toluwanimi Davies
People download digital applications, such as Uber Eats and TaskRabbit, to expand their social networks and to make daily life tasks easier.
Once someone downloads an application, they are met with a pop-up: the Terms of Service. The Terms of Service is a reference document provided to all pending users of these digital
platforms and contains a long list of conditions that must be accepted at face value before the
user can use the application. These lists are often so long and filled with legal jargon that users are discouraged from reading them thoroughly and simply scroll to the bottom to accept the terms. User comprehension and the ability to provide informed consent vary significantly across the demographic groups that use these online platforms. Once the terms have been accepted, the user has immediately, and often unknowingly, given consent for the companies to access and collect
their personal information. Contrary to what some might believe, the purpose of the user
providing consent is to protect companies from any liabilities that can arise and not primarily to
protect user privacy [citation]. In this paper, I argue that while informed consent is a complex
process, it remains a fundamental aspect of a user’s privacy. I will discuss the two required readings for the week, “To Consent or Not to Consent,” and identify one strength and one weakness of each.
Lee and Zong’s article examines the ethical and legal implications of facial
recognition technology and the issue of consent in data collection (2019). It explores how
companies often rely on users' consent to shift the responsibility of safeguarding personal
information onto individuals. The article argues that consent alone is inadequate for addressing
the racial bias and unjust use of facial recognition technology. The authors suggest participatory
approaches to data set creation and algorithm design as a more inclusive and accountable
solution.
The strength of this article for my argument is its exploration of the potential risks and
ethical concerns associated with the widespread use of facial recognition technology. The
authors emphasize that users may not be able to fully comprehend the potential future negative implications of this technology and therefore cannot give informed consent. To
give informed consent, a user must understand the potential risks and accept them. The authors
discuss how the facial recognition data collected by these digital platforms can be co-opted and
potentially misused by companies and law enforcement agencies. The authors effectively raise awareness about the risks to future privacy that users may not understand when consenting.
There have been recent instances where the implementation of machine learning through
partnerships has resulted in a lack of privacy protection within the healthcare system. When
people access healthcare, the immediate concern is often the medical matter at hand and not privacy. Similarly, when people want to use digital platforms, their immediate concern is access,
and privacy becomes an afterthought. This issue becomes more complex when healthcare meets
digital innovations. One example is the collaboration between DeepMind and the Royal Free
London NHS Foundation Trust in 2016 (Murdoch 2021). The partnership used machine learning to aid in
managing acute kidney injury. Critics pointed out that patients had no control over the use of
their information, and the discussion on privacy impacts was insufficient. Ensuring that users are
duly informed of potential risks, both immediate and future, is fundamental to preserving
their privacy.
A weakness of the article is that the authors primarily focus on the negative implications
and overlook potential positive use cases of facial recognition technology.
Governments do run the risk of discouraging innovation by placing more restrictions on digital
companies to protect user privacy. However, there must be a balance between user responsibility and corporate accountability in protecting privacy.
Bashir et al.’s article focuses on the issue of informed consent and information
asymmetry in online privacy agreements (2015). It presents the results of a survey evaluating
users' knowledge and opinions on online privacy, highlighting the gap between what people say
and what they do to protect their personal information. The authors describe the privacy paradox
whereby users demonstrate high voluntariness in the consent process despite insufficient comprehension. The article calls for alternative consent models and standardized agreements, as
well as increased accountability and public engagement in the creation and design of algorithms.
This text strengthens my argument about the importance of informed consent because it collects primary data and reports on the complexities of consent from users’
perspectives. The survey covers a wide range of topics related to online privacy, including
knowledge levels, opinions, and behaviors. The survey focuses on both users’ knowledge and their actual practices. The inclusion of various demographic factors adds further corroboration that user comprehension varies across groups. A recent study examined the privacy policies of two social media platforms, Facebook and Twitter, using readability scores and reading fluency measures to analyze the policies (Hanlon and Jones 2023). The findings indicated that the
length of the policies and the complexity of the language used made it unlikely for all users,
especially minors, to understand and provide informed consent. This raises ethical concerns
regarding the ability of all users to provide informed consent. As in Bashir et al.’s article, consent is shown to be complex because the provision of information does not automatically mean that the user comprehends it.
A weakness of the text lies in the survey design, as the sample population is composed
primarily of individuals associated with a large Midwestern university in the United States.
While this focus on an academic setting may yield valuable insights into the behaviors and
opinions of students, staff, and faculty, it may limit the generalizability of the findings to a
broader population. The demographic characteristics of the participants, such as their high
education level, might skew the results and not fully capture the diversity of internet users. This
could underestimate how much protection users would require in the consent process.
The Children's Online Privacy Protection Act (COPPA) in the United States and the
General Data Protection Regulation (GDPR) have specific clauses regarding children's consent,
considering the maturity level of children when determining their capability to agree to the
handling of their personal information (Müller et al 2018). This assessment of maturity primarily
relies on their cognitive abilities to fully comprehend their legal standing and to provide consent
for specific actions. When it comes to the use of digital platforms, both COPPA and the GDPR
aim to provide children with special protection when their personal data is used for commercial purposes. Elderly individuals and those for whom English is a second language make up considerable proportions of the population and, like children, have varying comprehension of how personal data is processed, the consequences of personal data practices, and their legal rights.
In conclusion, both articles contribute valuable perspectives to the discussion surrounding user comprehension and informed consent within online platforms. While the first
article effectively highlights the ethical concerns of facial recognition technology, it could
benefit from a more balanced discussion that acknowledges the potential benefits. The second
article's well-structured survey design provides valuable insights into the gap between user
knowledge, opinions, and behaviors in online privacy agreements. As we continue to deal with
the repercussions of uninformed consent, promoting genuine informed consent requires not only
legal and ethical considerations, but also user-friendly design that creates an inclusive and accountable digital environment.
Bibliography
Lee, Crystal, and Jonathan Zong. “Consent Won’t Magically Fix Our Data Privacy Problems.”
Slate Magazine, August 30, 2019. https://slate.com/technology/2019/08/consent-facial-
recognition-data-privacy-technology.html.
Hanlon, Annmarie, and Karen Jones. “Ethical Concerns about Social Media Privacy Policies: Do Users Have the Ability to Comprehend Their Consent Actions?” Journal of Strategic Marketing, July 7, 2023. https://www-tandfonline-com.myaccess.library.utoronto.ca/doi/full/10.1080/0965254X.2023.2232817.
Custers, Bart, Francien Dechesne, Walter Pieters, Bart Schermer, and Simone van der Hof. “Consent and Privacy.” In The Routledge Handbook of the Ethics of Consent, 247–58. Oxfordshire: Routledge, 2018.
https://doi.org/10.4324/9781351028264-23.
Murdoch, Blake. “Privacy and Artificial Intelligence: Challenges for Protecting Health Information in a New Era.” BMC Medical Ethics, September 15, 2021. https://link-springer-
com.myaccess.library.utoronto.ca/article/10.1186/s12910-021-00687-3.
Bashir, Masooda, Carol Hayes, April D. Lambert, and Jay P. Kesan. “Online Privacy and Informed Consent: The Dilemma of Information Asymmetry.” Proceedings of the Association for Information Science and Technology (2015). https://asistdl-onlinelibrary-wiley-
com.myaccess.library.utoronto.ca/doi/full/10.1002/pra2.2015.145052010043.