EMPOWERING SOCIAL MEDIA LAW: ADDRESSING THE URGENT NEED FOR ROBUST REGULATION IN THE DIGITAL AGE

AUTHORED BY DR. NEWAL CHAUDHARY[1]
Abstract:
With the rapid rise of social media
platforms in the digital age, the need for specific laws and regulations to
address the challenges and risks posed by these platforms has become
increasingly apparent. This article explores the necessity for social media law
and regulation, highlighting the potential issues of privacy infringements,
misinformation, hate speech, cyberbullying, online harassment, and the spread
of harmful content. The article
evaluates the existing legal frameworks and their limitations in effectively
regulating social media platforms. It also examines global approaches taken by
different countries in developing social media laws and the challenges faced in
striking a balance between freedom of speech and protecting users from online
harm.
Furthermore, the article discusses
the role of social media platforms in content moderation and the
responsibilities they should assume in ensuring a safe and accountable online
environment. It explores potential solutions and mechanisms, including
transparency requirements, user data protection, algorithmic accountability,
and effective moderation practices that can be incorporated into social media
laws to address these concerns.
Ultimately, the article emphasizes the importance of comprehensive
social media legislation to safeguard individuals' rights, protect vulnerable
groups, promote responsible behavior, and maintain the integrity of online
communication in the digital age. By addressing the specific needs for
regulation, social media law can enhance user safety, foster a healthier online
environment, and uphold fundamental principles of democracy and human rights.
Keywords: Social media regulation, Digital
age legislation, Content moderation, Cyberbullying laws, Data privacy
regulations.
I. Introduction
In today's digital age, social media
platforms have become an integral part of our lives. These platforms, such as Facebook,
Twitter, Instagram, and others, have revolutionized the way we connect,
communicate, and share information. They provide a space for individuals,
communities, and organizations to interact, express themselves, and stay
updated on current events. The significance of social media platforms lies in
their ability to connect people from different corners of the world, facilitate
the spread of ideas, and amplify voices that were previously marginalized. Social
media facilitates the sharing of ideas and information through virtual
networks. From Facebook and Instagram to Twitter and YouTube, social media
covers a broad universe of apps and platforms that allow users to share
content, interact online, and build communities. More than 4.7 billion people use
social media, equal to roughly 60% of the world's population[2]. People around the world use these platforms daily to connect with friends, family, and colleagues. Social networking platforms allow users to share photos,
videos, and updates with their followers, while also connecting with others in
their social network. Social media platforms also offer a range of tools for
businesses and organizations to connect with their audience and engage with
customers in new ways. Digital marketing has been transformed by the rise of
social media, with businesses now able to reach out to their target audience
through targeted advertising and social media campaigns. Social media allows
businesses to build relationships with their customers and create engaging
content that can help to drive brand awareness and loyalty.
Social media platforms have also
given rise to online communities, where users can connect with others who share
similar interests, hobbies, or experiences. These online communities offer a
way for people to connect and engage with others from around the world,
regardless of geographical location[3]. However,
the increasing influence and impact of social media platforms have raised
concerns about the challenges and risks they bring. The need for specific laws
and regulations to address these issues has become evident. While social media
platforms have facilitated the democratization of information and empowered
individuals, they have also given rise to various challenges. One of the
significant challenges is the spread of misinformation and fake news. With the
ease of sharing information on social media, false or misleading content can
quickly go viral, leading to harmful consequences. Misinformation can sway
public opinion, disrupt political processes, and even incite violence. The
dissemination of hate speech, cyberbullying, and online harassment is another
pressing issue that needs to be addressed. The anonymity and reach of social
media platforms have emboldened individuals to engage in harmful behaviors,
causing harm to others and undermining online safety. Furthermore, the
collection and use of personal data by social media platforms have raised
concerns about privacy and data protection. These platforms gather vast amounts
of user data, often without explicit consent, and utilize it for targeted
advertising and other purposes. This practice has raised questions about the
transparency of data collection, user consent, and the potential misuse of
personal information. Social media platforms have become a vital part of our
digital age, connecting people and enabling the exchange of ideas. However, the
challenges and risks associated with these platforms necessitate the implementation
of specific laws and regulations.
II. The Challenges of Unregulated Social Media
Unregulated social media use can pose
a national security risk when it is used to spread propaganda, organize
violence, or when user data is compromised. As private businesses, social media companies have little incentive to enforce conduct and security measures when controversy grows their user base. To protect user data and enforce cybersecurity, the government should regulate social media platforms, especially those owned by overseas companies[4]. Unregulated
social media platforms often gather vast amounts of personal data from users
without providing adequate transparency or obtaining explicit consent. This
raises concerns about privacy infringements and the protection of user data.
Users may be unaware of how their personal information is being collected,
stored, and used by these platforms. Additionally, the potential for data
breaches and unauthorized access to personal data poses a significant risk to
individuals' privacy and can lead to identity theft or other malicious
activities. Most agree that obviously illegal content should be regulated: child pornography, direct and specific threats, violent extremist videos, and the like. The need to regulate anything that is not obviously illegal, however, is unclear and must be scrutinized. The government argues that unregulated social media promotes misinformation, hate speech, defamation, threats to public order, terrorist incitement, bullying, and anti-national activities. While this may be true, it is unclear why content regulation is the answer if the content is not obviously illegal. In addition, social media is simply a platform for expression; making the platform liable for content, even obviously illegal content, makes no more sense than making a transporter, restaurateur, or cellphone network provider liable for the content of any discussion that used their infrastructure[5].
Here are examples of how unregulated social media platforms can infringe on user privacy:
· Facebook: Facebook collects a wide range of personal data from its users, including their names, email addresses, phone numbers, birthdates, location data, and even facial scans. This data is used to target users with advertising, but it can also be used for other purposes, such as tracking users' online activity or selling their data to third parties.
· Instagram: Instagram likewise collects a great deal of personal data from its users, including their photos, videos, and location data, which can be put to the same advertising, tracking, and data-sharing uses.
· TikTok: TikTok is a newer platform, but it has already been accused of collecting extensive personal data from its users, including their videos, location data, and even voice recordings, again for targeted advertising and other secondary uses.
In each of these cases, the social
media platforms are not providing adequate transparency about how they are
collecting and using user data. They are also not obtaining explicit consent
from users before collecting this data. This raises concerns about the privacy
of users and the potential for their data to be used for malicious purposes. In
addition to the privacy concerns mentioned above, unregulated social media
platforms also pose a risk of data breaches. In 2018, Facebook experienced a
massive data breach that exposed the personal data of over 50 million users.
This breach could have allowed hackers to steal users' identities, financial
information, and other sensitive data. The potential for data breaches and
unauthorized access to personal data is a serious threat to user privacy. It is
important for social media platforms to take steps to protect user data and to
be more transparent about how they are collecting and using this data.
Unregulated social media platforms
pose significant risks to individuals' privacy and data protection due to
several factors:
· Lack of transparency: Unregulated platforms often fail to provide clear and transparent information about their data collection practices. Users may not be fully aware of what personal information is being collected, how it is being stored, and how it is being used by these platforms. This lack of transparency makes it challenging for individuals to make informed decisions about sharing their personal data and understanding the potential consequences.
· Inadequate consent mechanisms: Unregulated platforms may not obtain explicit and informed consent from users before collecting their personal data. Consent is crucial in ensuring that individuals have control over their personal information and understand the purposes for which it will be used. Without proper consent mechanisms, users may unknowingly share their data or may not have the opportunity to opt out of certain data collection practices.
· Data breaches and security vulnerabilities: Unregulated platforms may not have robust security measures in place to protect user data from data breaches or unauthorized access. These platforms often store vast amounts of personal information, including names, email addresses, location data, and even sensitive information such as political views or personal preferences. In the event of a data breach, this information can be exposed, leading to identity theft, financial fraud, or other malicious activities.
· Data sharing with third parties: Unregulated platforms may share user data with third-party entities without sufficient safeguards or user consent. This data sharing can occur for targeted advertising purposes or other commercial interests. The lack of regulation can result in the misuse or exploitation of personal data by these third parties, further compromising user privacy.
· Lack of user control and recourse: Unregulated platforms may not provide users with adequate control over their personal data or avenues for recourse. Users should have the ability to access, modify, or delete their personal information as well as understand how it is being used. Additionally, in the absence of regulations, users may not have effective mechanisms for reporting privacy infringements or seeking redress for any misuse of their data.
The unregulated nature of social
media platforms has contributed to the rapid spread of misinformation and fake
news. With limited fact-checking mechanisms and the ease of sharing content,
false or misleading information can quickly go viral, reaching a wide audience.
This poses a threat to public discourse, democratic processes, and the
formation of informed opinions. The proliferation of misinformation can have
serious consequences, such as influencing election outcomes, exacerbating
social divisions, and even impacting public health during crises. The
unregulated nature of social media platforms has created an environment where
misinformation and fake news can rapidly spread and have significant
consequences such as:
· Limited fact-checking mechanisms: Unlike traditional media outlets, social media platforms often lack robust fact-checking mechanisms. This means that false or misleading information can be easily disseminated without undergoing proper scrutiny or verification. This lack of fact-checking allows misinformation to proliferate unchecked, as there are no reliable mechanisms in place to filter out inaccurate content.
· Ease of sharing content: Social media platforms are designed for seamless and rapid sharing of information. This ease of sharing allows misinformation to spread quickly and widely. With just a few clicks, users can repost or retweet content, making it accessible to their entire network of followers or friends. As a result, false or misleading information can reach a vast audience within a short period, amplifying its potential impact.
· Virality and wide reach: Social media platforms have the power to make content go viral, thanks to algorithms that prioritize engagement and popularity. This means that content with sensational or controversial elements is more likely to gain attention and be widely shared. Misinformation and fake news often contain these attention-grabbing elements, making them more likely to go viral and reach a broad audience. This rapid spread of misinformation poses a threat to public discourse, as false narratives can gain traction and influence public opinion.
· Threats to democratic processes: Misinformation and fake news can undermine democratic processes by distorting public opinion and manipulating elections. Inaccurate information can sway voters' decisions, misrepresent candidates or political parties, and create divisions within society. By spreading false narratives, misinformation can disrupt the democratic exchange of ideas, hinder informed decision-making, and erode trust in institutions.
· Exacerbation of social divisions: The spread of misinformation on social media can exacerbate social divisions and polarize communities. False information can fuel existing biases, stereotypes, or prejudices, leading to increased hostility and animosity between different groups. This can further deepen societal rifts and hinder constructive dialogue or understanding.
· Impact on public health: Misinformation on social media platforms can have severe consequences for public health, especially during crises or pandemics. False or misleading information about treatments, prevention measures, or the nature of the disease can jeopardize public health efforts and undermine trust in scientific consensus. This can lead to non-compliance with health guidelines, the spread of myths or conspiracy theories, and increased risks to individuals' well-being.
The unregulated nature of social media platforms enables the rapid spread of misinformation and fake news, posing significant risks to public discourse, democratic processes, and the formation of informed opinions. It can influence election outcomes, deepen social divisions, and impact public health during crises.

The lack of effective regulation on social media platforms has also allowed the proliferation of hate speech and online harassment. Users can easily disseminate discriminatory, racist, or offensive content, which can lead to the marginalization and harm of targeted individuals or communities. Online platforms have become breeding grounds for hate speech, cyberbullying, and trolling, creating a toxic environment that inhibits free expression and negatively impacts individuals' mental well-being.

Unregulated social media platforms have enabled the rise of cyberbullying, which involves the harassment, intimidation, or targeted abuse of individuals online. Cyberbullying can have severe psychological and emotional effects on victims, leading to anxiety, depression, and even suicidal tendencies. The anonymity and distance afforded by social media platforms make it easier for perpetrators to engage in such harmful behavior, often without facing consequences. Without proper regulation and enforcement, addressing cyberbullying becomes increasingly challenging.

Unregulated social media platforms can also inadvertently amplify harmful content, including violence, self-harm, and explicit or graphic material. This poses a significant risk, especially to vulnerable individuals, such as children and those with mental health issues. Without appropriate regulations, harmful content can easily circulate and negatively impact individuals' well-being, perpetuating a culture of harm and exploitation.

The lack of regulation on social media platforms has thus given rise to several challenges. Privacy infringements, the spread of misinformation, hate speech, online harassment, cyberbullying, and the proliferation of harmful content are some of the significant issues that require attention.
III. Evaluation of Existing Legal Frameworks
The proliferation of social media
platforms and their associated challenges have prompted the need for legal
frameworks to regulate their operations and address the risks they pose.
Existing legal frameworks vary across jurisdictions, with some countries
implementing specific laws and regulations targeting social media platforms,
while others rely on broader legislation to govern online activities.
Evaluating the effectiveness of these existing legal frameworks is crucial to
determine their adequacy in addressing the challenges posed by social media platforms.
1. Nepal:
In Nepal, the Electronic Transactions
Act, 2063 (2008) serves as the primary legislation governing electronic
transactions, including social media. Additionally, the Nepal Information
Technology Act, 2061 (2004), and its accompanying rules also cover certain
aspects related to social media usage. These laws aim to regulate online
activities, ensure data security, and protect individuals from cybercrimes.
2. India:
In India, the Information Technology
Act, 2000 (IT Act) is the primary legislation governing online activities,
including social media. The IT Act provides a legal framework for electronic
transactions, cybercrime, and intermediary liability. The Information
Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
recently introduced guidelines specifically for social media platforms and
digital media entities, outlining their responsibilities and obligations.
3. Asia-Pacific countries:
Different countries in the
Asia-Pacific region have developed their own legal frameworks to regulate
social media platforms. These frameworks aim to address various aspects related
to social media, including privacy protection, cybersecurity, and information
dissemination. Here are some examples of legal frameworks in the region:
· In Singapore, social media platforms are governed by laws such as the Broadcasting Act and the Computer Misuse and Cybersecurity Act[6]. These laws provide provisions to regulate online content and protect against cyber threats.
· Australia has implemented a range of laws relevant to social media, including the Australian Privacy Act, Telecommunications Act, and Criminal Code Act[7]. These laws address issues such as privacy protection, telecommunications regulation, and criminal offenses related to online activities.
· Japan has enacted the Act on the Protection of Personal Information and the Act on the Limitation of Liability for Damages of Specified Telecommunications Service Providers[8]. These laws focus on safeguarding personal data and imposing liability on telecommunications service providers for certain offenses.
· South Korea has the Personal Information Protection Act and the Act on Promotion of Information and Communications Network Utilization and Information Protection[9]. These laws aim to protect personal information and regulate the use of information and communications networks.
· China has introduced the Cybersecurity Law and the Decision on Strengthening Online Information Protection[10]. These regulations emphasize cybersecurity measures and the protection of online information within the country.
Despite the existing legal
frameworks, there are several limitations and gaps in regulating social media
platforms across Asia-Pacific countries. These limitations include:
1. Inconsistent Definitions: Different countries may define
social media and related terms differently, leading to inconsistencies in
regulation. This lack of standardized definitions can make it difficult to
effectively regulate social media platforms and address specific issues and
concerns.
2. Jurisdictional Challenges: Social media platforms often
operate internationally, making it challenging to enforce regulations that are
limited to national borders. These platforms can be subject to different legal
jurisdictions, further complicating the enforcement of regulations and ensuring
compliance with local laws.
3. Limited Accountability: Intermediary liability protections,
such as safe harbor provisions, may shield social media platforms from legal
consequences for the actions of their users. This limited accountability can
make it challenging to hold platforms responsible for content that may be
harmful, defamatory, or infringing on the rights of others.
4. Lack of Comprehensive Privacy Laws: Some countries have yet to
establish comprehensive privacy laws that adequately address the complexities
of social media and protect user data. This gap in legislation may leave users
vulnerable to privacy infringements and data misuse by social media platforms.
5. Technological Advancement: Rapid advancements in technology
often outpace regulatory frameworks, making it challenging to keep up with
emerging issues and concerns. New features, data collection practices, and
algorithms used by social media platforms may raise privacy and ethical
concerns that existing regulations may not cover adequately.
Case studies:
1. Facebook-Cambridge Analytica Scandal
(Global): The
unauthorized harvesting and misuse of personal data of millions of Facebook
users by Cambridge Analytica raised concerns about user privacy, data
protection, and the role of social media platforms in ensuring data security.
This case highlighted the need for tighter regulations and increased
transparency in handling user data.
2. WhatsApp Traceability Issue (India)[11]:
The Indian government sought traceability of messages on WhatsApp to
counter fake news and hate speech. WhatsApp argued that enabling traceability
would undermine its end-to-end encryption and compromise user privacy. This
case highlighted the challenges in balancing user privacy and national security
concerns.
3. Myanmar's Facebook Role in Rohingya
Crisis (Myanmar):
Facebook faced scrutiny for its role in the spread of hate speech and
misinformation that contributed to the Rohingya crisis in Myanmar. This case
raised questions about the responsibility of social media platforms in
preventing the misuse of their platforms for harmful purposes.
These case studies illustrate the
legal challenges faced in regulating social media platforms, including issues
related to privacy, data protection, intermediary liability, and the impact of
social media on societal conflicts.
IV. Global Perspectives on Social Media Law
Social media laws vary across different countries, reflecting their cultural, political, and legal contexts. A comparative analysis highlights several distinct approaches and considerations:
1. Regulatory Approach: Some countries, like Germany and
France, adopt a more regulatory approach to social media laws. They impose
strict regulations and penalties on social media platforms to ensure compliance
with content moderation, hate speech, and data protection standards.
2. Self-Regulation: Other countries, such as the United
States, prioritize self-regulation and industry standards. In the U.S., the
approach is guided by the First Amendment, which protects freedom of speech,
limiting regulations on social media platforms. These countries rely on
platforms to develop and enforce their own content policies and moderation
standards.
3. Comprehensive Privacy Frameworks: The European Union's General Data
Protection Regulation (GDPR) provides a comprehensive framework for protecting
user data and privacy. It emphasizes consent, transparency, and user rights,
placing obligations on social media platforms to handle personal data
responsibly.
4. Balancing National Security and Free
Speech: Countries
like Australia and Singapore have introduced laws that aim to strike a balance
between national security concerns and freedom of speech. These laws require
social media platforms to remove or restrict access to content that incites
violence, promotes terrorism, or spreads extremist ideologies.
Balancing freedom of speech with the need to protect users from online harm is a critical challenge in social media regulation. It requires careful consideration and a nuanced approach:
1. Freedom of Speech: Many countries recognize freedom of
speech as a fundamental right and seek to protect and uphold this right, allowing
for a wide scope of expression on social media platforms.
2. Harmful Content: At the same time, countries are
increasingly concerned about harmful content, including hate speech,
disinformation, and incitement to violence. They develop policies and
regulations that require social media platforms to address and remove such
content to protect users.
3. Content Moderation: Determining what constitutes
harmful content and implementing effective content moderation policies is an
ongoing challenge. Striking the right balance between protecting users and
avoiding undue censorship remains a complex task.
Challenges in social media regulation
include jurisdictional boundaries, technological advancements outpacing
regulations, balancing conflicting interests, and ensuring platform
accountability. Governments and regulators need to continuously review and
update their approaches to address these challenges effectively.
V. The Role of Social Media Platforms in Content Moderation
Social media platforms have a responsibility
to ensure that their platforms are safe for users. This includes protecting
users from harmful content, such as hate speech, violence, and child sexual
abuse content. It also includes protecting users from being scammed or
defrauded.
Social media platforms can fulfill
their responsibility to ensure a safe online environment by:
· Having clear and transparent content moderation policies. These policies should outline what types of content are not allowed on the platform and how content will be moderated.
· Using automated and manual content moderation tools. Automated tools can help to identify and remove harmful content quickly. Manual tools can be used to review content that is flagged by users or that is not detected by automated tools.
· Having a process for users to report harmful content. Users should be able to report harmful content easily and should be confident that their reports will be taken seriously.
· Working with law enforcement to investigate and prosecute cases of harmful content. This helps to deter people from posting harmful content and to hold them accountable for their actions.
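The two-stage workflow described above (automated screening, with manual review for flagged or user-reported posts) can be sketched in a few lines of Python. This is a purely illustrative toy model: the blocklist, the heuristic, and every name here are hypothetical and do not represent any platform's actual moderation system.

```python
# Hypothetical two-stage moderation sketch: an automated filter removes
# clear violations, while borderline or user-reported posts are queued
# for human review. All rules and names here are illustrative only.

BANNED_TERMS = {"example-slur", "example-threat"}  # hypothetical blocklist


def automated_check(post: str) -> str:
    """Return 'remove', 'review', or 'allow' for a single post."""
    text = post.lower()
    if any(term in text for term in BANNED_TERMS):
        return "remove"  # clear violation: removed automatically
    if text.count("!") > 5:  # crude stand-in for a real classifier
        return "review"
    return "allow"


def moderate(posts, user_reports):
    """Split posts into removed, human-review queue, and published lists.

    `user_reports` is a set of indices of posts reported by users;
    reported posts are always escalated to manual review.
    """
    removed, review_queue, published = [], [], []
    for i, post in enumerate(posts):
        verdict = automated_check(post)
        if verdict == "remove":
            removed.append(post)
        elif verdict == "review" or i in user_reports:
            review_queue.append(post)  # escalate to human moderators
        else:
            published.append(post)
    return removed, review_queue, published


removed, queue, ok = moderate(
    ["hello world", "example-slur here", "buy now!!!!!!"],
    user_reports=set(),
)
```

In a real system the `automated_check` stage would be a trained classifier rather than a keyword list, but the structural point the article makes is the same: automation handles the clear cases at scale, while ambiguous and user-reported content goes to human reviewers.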
The content moderation policies and practices of social media platforms vary: some platforms have stricter policies than others, and some rely more heavily on automated tools. However, most social media platforms prohibit several common categories of content:
· Hate speech: content that attacks a person or group on the basis of their race, religion, ethnicity, national origin, sexual orientation, gender identity, or disability.
· Violence: content that depicts or promotes violence.
· Child sexual abuse content: content that depicts or promotes the sexual abuse of children.
· Spam: content that is unsolicited or irrelevant.
· Harassment: content that is intended to intimidate, threaten, or bully another person.
To enforce these policies, platforms combine automated tools, which identify and remove harmful content quickly, with manual review of content that is flagged by users or missed by automated systems.
Transparency and accountability are
vital aspects of content moderation by social media platforms. There are
ongoing discussions regarding the need for platforms to be transparent about
their content moderation practices, including the criteria used for
decision-making and the handling of appeals. Users and stakeholders seek
greater clarity on how platforms enforce their policies, address biases, and
handle controversial or sensitive content. Additionally, calls for increased
accountability focus on holding platforms responsible for the consequences of
their content moderation actions and ensuring a fair and consistent approach.
VI. Importance of Comprehensive Social Media Legislation
Social media platforms have become an integral part of our lives, and they collect vast amounts of personal data about us. They can also be used to spread harmful content such as hate speech and child sexual abuse material, to bully and harass, to spread misinformation, and to manipulate public opinion and interfere with elections. Comprehensive legislation is therefore needed to protect privacy and other rights in the digital realm, to shield vulnerable groups from harmful content, to promote responsible behavior online, and to uphold the integrity of online communication.
The following are some of the key
areas that comprehensive social media legislation should address:
a) Privacy and data protection: This should include provisions to
protect users' personal data, such as their names, contact information, and
location data.
b) Freedom of expression: This should include provisions to
protect freedom of expression, but also to prevent the spread of harmful
content.
c) Content moderation: This should include provisions to
ensure that social media platforms are effectively moderating content that is
harmful or illegal.
d) Accountability: This should include provisions to
hold social media platforms accountable for the content that is posted on their
platforms.
Comprehensive social media
legislation is vital for safeguarding individuals' rights, protecting
vulnerable groups, promoting responsible behavior, and upholding the integrity
of online communication.
VII. Conclusion
The significance of comprehensive
social media legislation cannot be overstated in the digital age. Such
legislation is essential for safeguarding individuals' rights, protecting
vulnerable groups, promoting responsible behavior, and upholding the integrity
of online communication. It establishes clear boundaries and guidelines for
social media platforms, ensuring that fundamental rights such as freedom of
expression and privacy are respected and upheld. Through provisions that combat
misinformation, hate speech, cyberbullying, and other harmful practices,
legislation fosters a safe and inclusive digital environment. Moreover,
comprehensive social media legislation promotes accountability, transparency,
and responsible conduct among platform operators and users alike. By addressing
the unique challenges and risks associated with social media platforms,
legislation paves the way for a more responsible, equitable, and trustworthy
digital landscape. Ultimately, a comprehensive legal framework is indispensable
in shaping the future of social media, empowering individuals, and ensuring the
benefits of these platforms are harnessed responsibly and ethically.
[1] Advocate, Supreme Court of Nepal; Assistant Professor and Chief of Student Welfare, Nepal Law Campus, Tribhuvan University, Exhibition Road, Kathmandu, Nepal.
[2] Investopedia. (2023, March 8). Social Media:
Definition, Effects, and List of Top Apps. Investopedia. Retrieved from https://www.investopedia.com/terms/s/social-media.asp
[3] Elrashidy, Y. (2023, February 17). The Power of
Social Media: Connecting and Engaging in the Digital Age. Times of India Blog.
Retrieved from
https://timesofindia.indiatimes.com/readersblog/elrashidy-media-group/the-power-of-social-media-connecting-and-engaging-in-the-digital-age-50585/
[4] Pfeiffer Law. (2023, February 25). The Risk of
Unregulated Social Media. Pfeiffer Law Blog. Retrieved from https://www.pfeifferlaw.com/entertainment-law-blog/the-risk-of-unregulated-social-media
[5] ORF Online. (2023, January 20). Government Should Not
Regulate Social Media. ORF Online. Retrieved from
https://www.orfonline.org/expert-speak/government-should-not-regulate-social-media-57786/
[6] Cyber Security Agency of Singapore. (2018).
Cybersecurity Act. Retrieved from https://www.csa.gov.sg/legislation/Cybersecurity-Act
[7] Attorney-General's Department. (1979).
Telecommunications (Interception and Access) Act 1979. Retrieved from https://www.ag.gov.au/crime/telecommunications-interception-and-surveillance
[8] Japan. (2019). Telecommunications Business Act.
Retrieved from https://www.japaneselawtranslation.go.jp/en/laws/view/3610/en
[9] Korea Communications Commission. (2022, June 27).
Presentation on data protection in Korea. Retrieved from https://rm.coe.int/presentation-data-protection-in-korea-korea-communications-commission/16808c1fea
[10] Protiviti. (2023, March 11). Data privacy consulting.
Retrieved from https://www.protiviti.com/hk-en/data-privacy-consulting
[11] The Indian Express. Retrieved from https://indianexpress.com/article/explained/whatsapp-india-it-rules-traceability-clause-case-explained-7331039/