
Cyberbullying Fact Sheet: Identification, Prevention, and Response


UPDATED for 2023! This detailed guide is a nine-page summary – filled with as much useful information as possible – to equip educators, parents, and other youth-serving adults to spot cyberbullying, respond to it appropriately and meaningfully, and to prevent its future occurrence among those they care for. If you only have time to read one fact sheet from the Cyberbullying Research Center to get up-to-speed about the problem and what you can do, read this one.

Hinduja, S. & Patchin, J. W. (2023). Cyberbullying fact sheet: Identification, Prevention, and Response. Cyberbullying Research Center. Retrieved [insert date], from https://cyberbullying.org/Cyberbullying-Identification-Prevention-Response-2023.pdf

Download PDF

(NOTE: We have a much older version of this Cyberbullying fact sheet available here, in case you were looking for it or had linked to it from another site: https://cyberbullying.org/Cyberbullying-Identification-Prevention-Response.pdf)


The post Cyberbullying Fact Sheet: Identification, Prevention, and Response appeared first on Cyberbullying Research Center.


Takeaways from the Surgeon General’s Advisory on Social Media


U.S. Surgeon General Dr. Vivek Murthy issued an advisory earlier this year entitled “Social Media and Youth Mental Health.” In this public statement, Dr. Murthy summarizes the (limited and often methodologically shaky) research related to adolescents’ use of social media. His primary focus is on mental health, though other aspects of child wellbeing are also discussed (including cyberbullying, exposure to sexual content, and sleep problems). His research question is one he has increasingly heard from parents: “Is social media safe?” The answer, as you can imagine, is rather complicated.

Surgeon General Murthy begins his remarks with an important caveat: “robust independent safety analyses on the impact of social media on youth have not yet been conducted.” When referring to the research that has been done, Dr. Murthy is careful to use language that highlights the uncertainty of the results: “Frequent social media use may be associated with distinct changes in the developing brain…” “…adolescents may experience heightened emotional sensitivity…” “…the introduction of the social media platform may have contributed to…depression.” The biggest problem is that much of this research is correlational, relying on the observation that increases in teen depression, anxiety, and suicidality have occurred alongside increases in screen time and social media use. It’s undeniable and tragic that more teens are struggling today than even just a decade ago. If you look back even further, though, you’d see that teen rates of suicide and suicidal thoughts were even higher in the 1990s. Nevertheless, adolescent mental health is an important contemporary social problem warranting our investigation.

Dr. Murthy also points out the potential benefits of social media, including the ability of youth to connect with others and to explore diverse interests. Indeed, 80% of teens have reported “feeling more connected to what is going on in their friends’ lives” when using social media. Two-thirds say they like that they “have people who can support them through tough times.” Moreover, social media can offer assistance to youth searching for help on a variety of adolescent challenges, and can be especially beneficial to marginalized youth (racial, ethnic, or sexual/gender minorities) who may not have adequate support systems in their schools or communities.


None of this has stopped pundits from pointing to this communique as proof that social media should be prohibited for minors, citing it as an “extraordinary public warning.” An NBC News headline declares “Social media is driving teen mental health crisis, surgeon general warns.” That is not only a stretch, it is inflammatory and incites a mentality of panic among adults looking to help youth navigate the risks and benefits of tech. Let me put it this way: There are a lot of bad things that happen at school every day, including exclusion, bullying, assaults, and worse. If we focus exclusively on just those bad things, what parent would want to send their child to school?

It is important to remember that the vast majority of teens use social media. According to a 2022 survey by the Pew Research Center, 62% of 13- to 17-year-olds use Instagram, 59% use Snapchat, and about a third remain on Facebook. About half of teens surveyed say they go onto these platforms daily. If you include video sharing services like YouTube and TikTok, at least 95% of teens participate in social media. The Surgeon General referred to a study finding that “adolescents who spent more than 3 hours per day on social media faced double the risk of experiencing poor mental health outcomes.” (Notably, the data analyzed in that particular study are 7-10 years old. That is an eternity when it comes to technological trends!) Given the widespread adoption of social media and the fact that the average teen today spends nearly 9 (NINE!) hours per day on screens–much of that time on social media–it can be concluded that most youth have positive experiences on these platforms. If social media truly had a direct negative effect on child well-being universally, many more children would be having problems.

All of that said, there is emerging evidence that certain types of social media use can have negative effects on mental health, sleep, and lead to related problematic outcomes like feelings of social isolation for some kids. But teens and young adults aren’t ignorant of these concerns. Indeed, many of the teens I speak with acknowledge this and know that social media can suck them in, and can suck in general sometimes. More than a third of teens (36%) told the Pew Research Center that they “spend too much time on social media” and about one in ten said social media has a negative effect on them. If we agree that social media does have benefits for many teens (especially marginalized youth), we need to create opportunities for teens to regulate their social media experiences to be more positive.

Surgeon General Murthy concludes his statement with “At this time, we do not yet have enough evidence to determine if social media is sufficiently safe for children and adolescents.” That may be true, but do we have enough evidence to determine it is unsafe? And if we do, are there steps that can be taken to minimize the likelihood of harm and amplify the positive experiences? Beyond altogether banning social media (Sameer will discuss efforts to do so, and their limitations, in a new post next week), what can be done to make adolescent experiences on social media better?

First, parents should monitor the amount of time their child is on social media and apply appropriate limits. There’s no one-size-fits-all number that all families need to adhere to (e.g., no more than 3 hours per day!). Rather, parents should be reasonable and be on the lookout for any problems that might be connected to overuse, such as not getting enough sleep or not completing schoolwork. Second, parents should make sure their children are aware of the tools available to them on most platforms to make their experiences more enjoyable. For example, they can block and report users who are being hurtful and unfollow accounts that don’t bring them joy. Parents should also encourage them to stand up for—and be supportive of—others online. They should cultivate and lead an online community of positivity and kindness that others will want to be a part of.

Finally, and perhaps most importantly, parents need to cultivate an open line of communication where their children are willing to come to them should they run into trouble. Many of the problems encountered online could be managed with the help of a sympathetic and understanding adult. The idea is to give teens the knowledge and tools they can use to control their social media experience. Educate them on the potential mental health implications of constantly comparing one’s self to what they see on social media. Show them how to use platform tools like blocking, reporting, muting, or unfollowing to create a positive experience. And be there to help them should something come up.

Image: Eliott Reyna (unsplash)


State Laws, Social Media Bans, and Youth: What Are We Doing?


There has been a flurry of activity related to new legislation intending to make social media and gaming platforms safer and more accountable to upholding expected standards of trust, security, transparency, and privacy. These laws are being proposed because of continued concern of possible ill effects of popular platforms on the well-being of young people. While an objective look at the research base provides a complex picture of mixed findings related to the positives and negatives of social media use, many legislators realize that this is a topic of great concern to families across the United States and accordingly want to do something about it.

In late May 2023, U.S. Surgeon General Dr. Vivek Murthy issued an urgent call for action by all stakeholders to deeply understand the impact of social media on youth mental health.  I think that is incredibly necessary because the possibilities of harm are myriad, and it is likely that they have a compounding effect. And, that is our wheelhouse – exactly where we do most of our research, advocacy, and training in equipping schools, NGOs, and corporations to protect minors and build healthy online communities.

However, when I survey the landscape of the laws that are being proposed or passed around the nation, I am concerned that deep understanding has not taken place. I am concerned that politicians are not interfacing with online safety experts from a multitude of disciplines to gain a nuanced picture of the issues at hand. I am concerned that an antagonistic approach towards platforms will cause progress to sputter, and that what is needed is a cooperative partnership where goals can be achieved in as mutual a manner as possible. I am concerned that most lawmakers have a very shallow and incomplete appreciation not only of what the research base says (even Dr. Murthy acknowledges much uncertainty in the extant research findings), but also of the feasibility of what they suggest platforms should do.


Let’s talk about some state legislation which is built upon the cornerstone of age restrictions.  For example, Utah would make social media platforms off-limits to children ages 15 and younger. Similarly, a bill introduced in Texas would ban anyone under the age of 18 from using social media. In Louisiana, those under 18 apparently cannot have access to any sort of social media platform without express parental approval. How exactly is this going to happen? How will this be enforced on a practical level? Shouldn’t there be conversation about the rights of a young person to exercise their freedom of speech and expression online? Is it possible that depriving them of access is a human rights violation, as has been articulated by a prestigious international committee organized by the United Nations?

Without collecting a great deal of personally identifiable information that is ripe for exploitation, I don’t understand how these limits will be enforced. Currently, the major platforms rely on the honor system and trust that the age a user inputs upon signup is truly their age. Even if each started to require photo or video selfies, the uploading of a government ID, or cooperation with a commercial age verification system, there exist methods to circumvent or bypass the gateway. Indeed, industry for years has tried to figure out age-verification solutions with the least amount of friction and the most amount of user-friendliness, so that they actually catch on and are not a deterrent to use. Biometric factors such as using specific regions of human speech bandwidth can be easily circumvented using a recording, while face recognition requirements can be bypassed by using a photo of an adult. Fingerprint or iris verification requires specialized hardware. The privacy concerns associated with the collection of all of these biometric markers are also significant.

Additionally, some laws (like in Arkansas) do not apply equally to all platforms. Much of this seems arbitrary or even sinister. For instance, companies that are mostly about gaming are exempt (those that “exclusively offers interacting gaming, virtual gaming, or an online service, that allows the creation and uploading of content for the purpose of interacting gaming”). So are those that make less than 25% of their revenue from social media, and those that provide cloud storage (What? Why? So random). A co-sponsor of this bill specifically stated that the goal of the legislation is “to empower parents and protect kids from social media platforms, like Facebook, Instagram, TikTok, and Snapchat.” What is curious is that an amendment to the law was filed recently that excludes any “social media company that allows a user to generate short video clips of dancing, voice overs, or other acts of entertainment.” Wait a second: Facebook, Instagram, TikTok, and Snapchat each allow and encourage that exact type of content to be created. This doesn’t make sense. What – and whose – interests are being served here?

Another law from Utah seeks to prohibit a social media company from using a design or feature that “causes a minor to have an addiction to the company’s social media platform.” What does this even mean? How do we define “addiction”? My four-year old keeps coming back to his Legos. Does not every toy manufacturer design products that induce a pleasurable neurobiological reaction in a child’s brain? What is the role of the user – even if they are a teenager – in developing personal agency and self-control, and in taking advantage of the screentime restrictions available within the app or even on their device to help facilitate self-control? What is the role of parents and guardians in meaningfully shepherding and restricting (over)use much as they would restrict anything else? Should we then not also ban Netflix from contributing to binge-watching? Lay’s Potato Chips from betting that we cannot eat just one? Nike for making so many new versions of Air Jordans for the sneakerheads among us? What is the culpability of hardware manufacturers who sell wearables that keep us tethered to technology? Why aren’t these questions being asked when something as major as a law that affects tens of millions of people is being proposed?


Legislation from Utah also attempts to impose a social media curfew that blocks online access of children from 10:30pm to 6:30am unless their parents adjust that range. Why do we need legislation for this? Can we not just ask parents to be parents? This is also not enforceable because of how easy it is to use proxies and VPNs, switch time zones within devices, and use services for network provision outside of those monitored. Furthermore, parents have a hard time even using the safety features and controls that device manufacturers and platforms already provide to them to safeguard their children, and now we are asking them to do yet one more thing? They are going to be tired of hearing their teen tell them he is not yet done with his homework at 10:30pm, and just be done with this restriction.

While I know this is not patently true, it seems like many legislators got together one morning with coffee and donuts, rallied around some alarmist sentiments they heard somewhere, engaged in a good amount of gnashing of teeth and pearl clutching, decided they must demonstrate action to remain relevant to their constituency, and came up with some feel-good, one-size-fits-all solutions before the breakfast food ran out. Unfortunately, a careful understanding of the complex issues at hand – and the feasibility of application and enforcement of their proposals – remains glaringly missing.

One of my biggest concerns is as follows: What is done with the identification data once it is used to verify identity? The dustbin of history is littered with examples of privacy violations where major platforms have mishandled personal data that they have been entrusted with. This is also to say nothing about violations by third-party entities, nor by intentional hacks or other forms of inappropriate data tracking and harvesting. Some of these laws attempt to punish companies that collect information from their users that does not pertain to age verification of the account. Sanctions range from $100 per violation in Wisconsin to $5,000 per violation in Utah. How is this going to be proven and enforced? Oh, by the way, Utah’s bills also give parents full access to their children’s online accounts – including their private messages. If the goal is healthier families, better parent-child relationships, and thriving teenagers, I’m not sure we’ve put sufficient thought into this idea.

At this point, I’d like to draw your attention to a neat resource over at Tech Policy Press created by Tim Bernard. He created a spreadsheet in the Spring of 2023 that lists 144 different bills introduced across 43 states focused on protecting children from Internet-based harms. Many (but not all) seem like knee-jerk responses based on a poor understanding of numerous interrelated factors that must be considered when identifying and proposing solutions to the problems at hand. Of course, I am in favor of those that call for increased education as part of the curriculum, or which require anonymous reporting systems, or which champion the importance of building positive school climates and cultivating soft skills (like social and emotional learning approaches, restorative practices, digital citizenship, and media literacy).


As I close, what is the point with all of this legislation? Is it to encourage further conversation and partnerships among the major stakeholders to put their heads down and develop responsibilities and strategies for the various sectors they represent (including at home and in schools)? Is it lip service to tickle the ears of a morally panicked citizen base whose primary perspectives about most issues are sourced from sensationalistic news stories and clickbait articles? Is it to bring the hammer down on the 800 pound gorillas of industry because they are easy to scapegoat, and everyone can agree they should do “more” but can’t present realistic solutions? Is it because we are not willing to look at ourselves in the mirror when it comes to the behaviors we ourselves model, the social environments we create offline, the level and quality of involvement we have in the lives of our youth, and the amount of effort it truly takes to support a kid these days? Finally, can anyone point to any research that demonstrates that these types of laws actually make a measurable difference in enhancing youth safety and well-being? Anywhere? In the world? Or are we just throwing gummy worms at the wall, hoping that at least one of them will stick?

Let me be clear: Platforms have to do more. Much more. We continue to pound the table for novel policies, programming, in-app safety features, educational initiatives, messaging campaigns, content moderation methods, and reporting protocols from social media and gaming companies. What is more, they are specifically requesting, heeding, and implementing some of our data-driven insights. But most of these pieces of governmental legislation are not helpful.  Indeed, very, very few are passing and becoming formally codified. Why is that? Largely, I think it is because many lack thoughtfulness and creativity, are lazily constructed (and involve a lot of copying and pasting of language from other proposed laws), and are not based on clear, consistent findings from empirical research. Many legislators are sadly wasting everyone’s time, the government’s resources, and our tax dollars. But most critically, they are failing to truly and meaningfully help the situation as our nation’s youth continue to struggle.

Featured Image: https://tinyurl.com/3va2dk9m


Social Media, Youth, and New Legislation: The Most Critical Components


In my last piece, I discussed how some legislation in various US states has been proposed without careful consideration of the contributing factors of internalized and externalized harm among youth. More specifically, I expressed concern that the complexities surrounding why youth struggle emotionally and psychologically demand more than simplistic, largely unenforceable solutions. We recognize that these state laws are proposed with the best of intentions and are motivated by a sincere desire to support our current generation of young people. However, they may be more harmful than helpful.

Legislators routinely reach out to us for our input on what we believe should be included in new laws related to social media and youth, and I wanted to share our suggestions here. In an effort to ensure that implementing these elements is feasible, and so that the resultant law(s) are not overly broad, we focus our attention on those we believe are most important.

The Need for Comprehensive Legislation

To begin, we are concerned about increased suicidality, depression, anxiety, and related mental health outcomes related to experiences with cyberbullying, identity-based harassment, sextortion, and use/overuse of social media platforms. Other problems associated with online interactions include the potential for child sexual exploitation and grooming, the exchange of child pornography, and human trafficking – but these occur across a variety of Internet-based environments and may be less specific to social media.

However, prohibitionist approaches have historically failed to work, lack clear scientific backing, are often circumvented, and violate the right to free expression and access to information. For example, society has spent many years focused on the purported relationship between violent video games and violent behavior, and legislation was created to safeguard children by preventing distribution of video games with certain violent content to minors. However, the courts soon granted injunctions against these state laws (see OK, IL, MI, MN, CA, and LA) in large part because the research was either missing, weak, or inconclusive. Even the APA stated in 2020 that there is “insufficient evidence to support a causal link.” What is more, they articulated that “Violence is a complex social problem that likely stems from many factors that warrant attention from researchers, policy makers and the public. Attributing violence to violent video gaming is not scientifically sound and draws attention away from other factors.” This is how I feel about the relationship between social media and well-being and/or mental health. As a result, these bans on social media may not be supported by the courts when challenged.

If we can agree, then, that a blanket ban on a particular social media platform is unlikely to prevent the kinds of behaviors we are interested in curtailing, and if we can agree that social media use benefits some people, what legislative elements are likely to have an impact?

Potentially Useful Legislative Elements

When it comes to elements of legislation that we feel could have the greatest positive impact, we have several suggestions. Comprehensive laws should consider including the following components:

  1. Requires third-party, annual audits to ensure that the safety and security of minors are prioritized and protected in alignment with a clear, research-established baseline and standard across industry. This will have to be done by a governmental entity, just as national emission standards for air pollutants are prescribed, set, and enforced by the Environmental Protection Agency, national safety and performance standards for motor vehicles by the National Highway Traffic Safety Administration, and national manufacturing, production, labeling, packaging, and ingredient specifications for food safety by the Food and Drug Administration. It will be an extremely difficult task requiring the brightest of minds thinking through the broadest of implications, but we’ve done it before to safeguard other industries. It has to be done here as well, and audits will ensure compliance.

  2. Requires platforms to improve systems that provide vetted independent researchers with mechanisms to access and analyze data while also adhering to privacy and data protection protocols. There are a number of critical public-interest research questions which hold answers that can greatly inform how platforms and society can protect and support users not just in the areas of victimization, but also democracy and public health. The push for transparency is at an all-time high. However, a government agency would need to coordinate this process in a way that protects the competitive interests and proprietary nature of what is provided by companies. In the bloodthirsty demand for platform accountability, I feel like this reality is too easily dismissed. They are private companies that do provide social and economic benefits, and we are trying to find mutually acceptable middle ground. As a researcher myself, I want as much data as possible, and I believe the research questions I seek to answer warrant access given the potential for social good. However, a governmental regulator should decide its merits (and my own). Plus, I should be contractually obligated to abide by all data ethics and protection standards or else face severe penalties by law, as determined by that governmental agency.  Progress made in the EU can serve as a model for how this is done in the US.

  3. Requires the strongest privacy settings for minors upon account creation by default. Minors typically do not make their initial privacy settings more restrictive, and so this needs to be done for them from the start.1, 2

  4. Mandates the implementation of age-verification and necessary guardrails for mature content and conversations. This is one thing that seems non-negotiable moving forward. Research suggests that females between the ages of 11 and 13 are most sensitive to the influence of social media on later life satisfaction while for males it is around 14 and 15. Clearly, youth are a vulnerable population in this regard. For these and related reasons, I am convinced that the age verification conundrum will be solved this decade, even if we have to live through a few inelegant solutions. We know that the EU is working on a standard (see eID and euCONSENT). One might not agree with it, and point to (valid) concerns about effectiveness of the process as well as privacy concerns and the fact that millions in the US do not even have government-issued ID, but I strongly believe the concerns will be overcome soon. And we can learn from the unfolding experiences of other countries. Overseas, the Digital Services Act will apply across the EU by February 17, 2024 and requires platforms to protect minors from harmful content. Companies somehow will have to set and implement the appropriate parameters and protections (and know who minors are) or face fines or other penalties enforced by member states.

  5. Establishes an industry-wide, time-bound response rate when formal victimization reports (with proper and complete documentation and digital evidence) are made to a platform. Many public schools around the US are required to respond to and investigate reports of bullying (or other behavioral violations) within 24 hours (with exceptions for weekends and holidays). To be sure, the volume of reports that platforms receive complicates the matter, but certain forms of online harm will have significant and sometimes traumatic consequences on the target. Research has shown that additional trauma can be caused simply by the incomplete or inadequate response made by trusted authority figures who are contacted for help.3-5 I have written about this extensively, with specific application to the impacts of negligence by social media companies. We know that imminent threats of violence and child sexual exploitation are escalated to the highest level of emergency response immediately. Less serious (based on choice selected in the reporting form and quick evaluation by manual moderators) should still receive a timely and meaningful response, even if this requires more staff and resources, as well as more R&D. Indeed, a legal mandate towards this end should quicken the pace of progress not only in optimizing the workflow of reporting mechanisms, but also preventing the harms in the first place (to thereby decrease the quantity of initial reports received).

  6. Establishes clear and narrow definitions of what constitutes extremely harmful material that platforms must insulate youth from. Research has shown that such exposure to this vulnerable population is linked to lower subjective well-being6 as well as engagement in risky offline behaviors.7 However, a definition cannot include every form of morally questionable content. What it should include without exception is, for example, that which promotes suicide, self-harm, substance abuse, child sexual exploitation material, terrorism or extremism, hate speech, and violence or threats towards others. The platforms youth use already prohibit these forms of content, but what is required is more accountability in enforcing those prohibitions. I believe it is infeasible for companies facilitating the exchange of billions of interactions every day to proactively prevent the posting or sharing of every instance of extremely harmful material across tens or hundreds of millions of youth. That said, though, they should face fines or other penalties for failing to respond promptly and appropriately to formal reports made of their existence.

Given the global nature of social media, it makes more sense to build these elements into a federal law, rather than to attempt to address social media problems piecemeal in different states. As I follow the developments in other countries and consider the momentum built since Frances Haugen’s whistleblower testimony in 2021, I believe federal legislation in the US is now necessary.

Current Federal Legislative Proposals

There are currently a couple of federal proposals focused on safety that have merit, but that also raise some concerns. A third bill intended to amend the Children’s Online Privacy Protection Act of 1998 (known as COPPA 2.0) focuses almost exclusively on privacy, marketing, and targeted advertising related to youth and is worth a separate discussion down the road.

To begin, let’s discuss the Kids Online Safety Act. I am on board with the clause that platforms should provide minors with more security and privacy settings (and more education and incentive to use them!). I wonder if parents really do need more controls, though. Research has shown that parents are already overwhelmed by the number of safety options they have to work with on platforms, that they want more tools for gaming instead of social media, and that they lean more heavily on informal rules than digital solutions (which I personally think is best!). Also, I formally speak to thousands of parents every year, and many admit to not taking the time to explore the tools already provided in-app by platforms. I need to hear more about exactly what parental controls are currently missing, and which new ones can add universal value to both youth and their guardians.

I am on board with shielding youth from harmful online content, with the caveats I provided earlier about censoring only the extreme types. Platforms cannot “prevent and mitigate” all content related to anxiety and depression, and arguably shouldn’t, given its value in teaching and helping information-seeking youth across the nation. I am on board with the clause requiring independent audits for the purposes of transparency, accountability, and compliance. I’m also on board with the clause giving researchers more access to social media data to better understand online harms, but would advocate for a more granular, case-by-case approval process to ensure that proprietary algorithms and anonymized data are properly safeguarded.

However, I am concerned that this law will make it difficult for certain demographics or subgroups of youth to operate freely online. The US Supreme Court has stated that minors have the right to access non-obscene content and to express themselves anonymously. Furthermore, social media does help youth in a variety of ways (as acknowledged by the US Surgeon General), including strengthening social connections and providing a rich venue for learning, discovery, and exploration. Previously there was concern that KOSA would allow non-supportive parents to have access to the online activities of their LGBTQ+ children. The bill was refined recently so that platforms are not required to disclose a minor’s browsing behavior, search history, messages, or other content or metadata of their communications. However, it’s not a good look for the lead sponsor of the bill to state in September 2023 that it will help “protect minor children from the transgender in this culture.” I’m not sure how to reconcile this.

Second, we have the Protecting Kids on Social Media Act. It mandates age verification, and I am on board with this because the benefits outweigh its detriments, it is already happening to some extent, and it will eventually become mainstream because of societal and governmental pressure. It just will. The bill bans children under age 13 from using social media; I do not support bans for the reasons mentioned above.

It also requires parental consent before 13- to 17-year-olds can use social media. My stance on this is that if a kid wants to be on social media, a legal requirement for parental consent is not going to be as strong or effective as the parent simply disallowing it and enforcing that rule. Again, regulating a requirement for parental consent before a minor posts a picture or sends a message or watches a short-form video on an app does no better job than laying your own set of “laws” under your roof about what your teen can and cannot do. And if a teen can convince a parent to override their household rules (just talk to the parents around you!), they can also convince them to provide formal consent on a platform they want to use. Regardless, if there is a will, there’s a way – that minor will somehow be able to get access, even if their friend (whose parents don’t care what they do online) creates an account for them. Finally, the Protecting Kids bill mandates the creation of a digital ID pilot program to verify age. As I’ve said, age verification through some sort of digital ID system is going to happen (even if it is a few years away), and so I am on board with this despite the valid concerns that have been shared.

As I close, I cannot emphasize strongly enough that a nuanced discussion of this topic is laden with an endless stream of “if’s, and’s, or but’s.” Every side can make a reasonable, passionate case for their position on any one of these issues (blanket social media bans, age verification systems, restricted access to online resources, appropriateness of content moderation for young persons, the availability and effectiveness of safety controls, the true responsibility of each stakeholder, etc.), and I cannot fault them. I respect their position and endeavor to put myself in their shoes. Certain variables (existing case law, empirical research, and international observations) have informed my own perspective and position, and I remain open to new knowledge that may change where I stand. Ultimately, we all want safer online spaces and a generation of youth to thrive regardless of where and how they connect, interact, and seek information. Hopefully, legislation that is specific, concise, practical, enforceable, and data-driven will be enacted to accomplish that goal.

Featured image: http://tinyurl.com/2p96zmfu (Nancy Lubale, Business2Community)

References

1. Barrett-Maitland N, Barclay C, Osei-Bryson K-M. Security in social networking services: a value-focused thinking exploration in understanding users’ privacy and security concerns. Information Technology for Development. 2016;22(3):464-486.

2. Van Der Velden M, El Emam K. “Not all my friends need to know”: a qualitative study of teenage patients, privacy, and social media. Journal of the American Medical Informatics Association. 2013;20(1):16-24.

3. Figley CR. Victimization, trauma, and traumatic stress. The Counseling Psychologist. 1988;16(4):635-641.

4. Gekoski A, Adler JR, Gray JM. Interviewing women bereaved by homicide: Reports of secondary victimization by the criminal justice system. International Review of Victimology. 2013;19(3):307-329.

5. Campbell R, Raja S. Secondary victimization of rape victims: Insights from mental health professionals who treat survivors of violence. Violence and Victims. 1999;14(3):261.

6. Keipi T, Näsi M, Oksanen A, Räsänen P. Online hate and harmful content: Cross-national perspectives. Taylor & Francis; 2016.

7. Branley DB, Covey J. Is exposure to online content depicting risky behavior related to viewers’ own risky behavior offline? Computers in Human Behavior. 2017;75:283-287.

The post Social Media, Youth, and New Legislation: The Most Critical Components appeared first on Cyberbullying Research Center.

Cyberbullying Continues to Rise among Youth in the United States


Here at the Cyberbullying Research Center, we routinely collect data from middle and high school students so that we can keep on top of what they are experiencing online. Over the last two decades, we have completed about twenty unique studies of teens and tweens in the United States involving more than 30,000 subjects. And that number doesn’t include the handful of studies we have done of youth in other countries, or of adults who have experienced online abuse or who work with adolescents who have. Collecting, analyzing, and summarizing data into up-to-date and meaningful resources for those working to prevent–or more effectively respond to–online abuse is one of the most important activities we do.


Our latest round of data collection was completed this past spring. In this project, we surveyed a national U.S. sample of approximately 5,000 13- to 17-year-old middle and high school students. This is the fourth time in the last seven years we have collected data from a large representative sample of U.S. youth using the same sampling strategy and methodology (2016, 2019, 2021, and now 2023). We were particularly interested this time around in seeing the extent of bullying and cyberbullying now that schools are largely back to normal following the COVID-19 pandemic.

In this latest study, 26.5% of students said they had experienced cyberbullying within the 30 days prior to taking the survey. This compares to 23.2% in 2021, 17.2% in 2019, and 16.7% in 2016. In 2023, the most common forms of cyberbullying experienced (among those who were cyberbullied) included:

• Someone posted mean or hurtful comments about me online (77.5%)
• Someone spread rumors about me online (70.4%)
• Someone embarrassed or humiliated me online (69.1%)
• Someone intentionally excluded me from a group text or group chat (66.4%)
• Someone repeatedly contacted me via text or online after I told them to stop (55.5%)

In 2016, 10.3% of students told us that they had stayed home from school because of cyberbullying. In 2023, that number nearly doubled to 19.2%. Finally, in 2016, about 43% of students said that bullying and cyberbullying were “a big problem” in their schools, while in 2023, 54% of students said the same.

Interestingly, even though the number of youth experiencing cyberbullying had increased, and more students told us that they stayed home from school because of cyberbullying, the percentage who said that they were cyberbullied in a way that significantly impacted their school experience actually dropped slightly (from 14.3% in 2021 to 13.5% in 2023). Similarly, online threats dropped from 22.6% to 20.7% over the same time period. Perhaps this is an indication that the worst forms of online abuse are declining even while the more typical forms endure (or even increase). Admittedly, these are very small decreases, but at least they are trending in the right direction, as opposed to the overall numbers of youth experiencing cyberbullying.
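For readers who want to track the trend across survey waves, the victimization figures reported in this post can be tabulated directly. This is a purely illustrative sketch using the past-30-day cyberbullying rates cited above (16.7% in 2016, 17.2% in 2019, 23.2% in 2021, and 26.5% in 2023); it simply computes the percentage-point change between consecutive studies:

```python
# Self-reported cyberbullying victimization in the prior 30 days,
# as reported across the Cyberbullying Research Center's four
# national U.S. surveys (figures taken from the text above).
rates = {2016: 16.7, 2019: 17.2, 2021: 23.2, 2023: 26.5}

years = sorted(rates)
# Percentage-point change between consecutive survey waves
changes = {
    (start, end): round(rates[end] - rates[start], 1)
    for start, end in zip(years, years[1:])
}
print(changes)  # {(2016, 2019): 0.5, (2019, 2021): 6.0, (2021, 2023): 3.3}
```

Laid out this way, the pattern in the data is easy to see: the rate was nearly flat from 2016 to 2019, jumped sharply through the pandemic years, and continued climbing (more slowly) into 2023.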

Somewhat surprisingly, the percentage of students who experienced bullying at school in 2023 remained remarkably similar to the level observed in 2021 (22.6% and 25%, respectively). As expected, school bullying dropped significantly during the pandemic, from over half of students saying they were bullied at school in the previous 30 days to less than one-quarter saying so just a couple of years later. This makes sense, since many students simply weren’t in school during the worst months of the pandemic. What wasn’t expected, however, was that this trend would continue into 2023, now that most students have moved back into their classrooms. We’re not sure what explains this. Perhaps as more youth were exposed to cyberbullying over the last few years, adolescents have become more comfortable participating in that form of bullying. Or it could be that interacting via online platforms has become such a big part of adolescent life since the pandemic that youth are simply more comfortable engaging with others digitally (both positively and negatively). But honestly, these are just speculations. We’ll have to do even more research and see how these trends persist moving forward.

We will continue to share additional findings from this latest study over the course of the next several weeks.

Photo by Lesli Whitecotton on Unsplash


What To Do When Your Child Cyberbullies Others: Top Ten Tips for Parents


(For a formatted .pdf version of this article for distribution, click on the image above [or click here]).

Spanish Translation Available Here

Finding out that your child is mistreating others online can be frustrating. Here’s how to respond:

  1. ACKNOWLEDGE THE ISSUE. As a parent or guardian, accept the reality that your child could be engaging in online behaviors that are hurting others. Rather than try to trivialize, rationalize, or ignore the problem at hand, realize that anyone can be cruel to others, given the right circumstances.
  2. REMAIN CALM. When addressing cyberbullying, try to discuss the issue in a level-headed manner without demonizing, disrespecting, or judging your child. Remember that your son or daughter isn’t the problem; their behavior is. Deal with it, but treat them with dignity rather than condemnation and shame. Otherwise, they may lash out and retaliate if they feel attacked or victimized themselves, and no progress will be made.
  3. KEEP AN OPEN LINE OF COMMUNICATION. Many youth engage in cyberbullying to get revenge for something someone else did first. Make sure that your kids know that they can come to you and discuss issues they are having with peers (offline or online). Give children the opportunity and skillset to solve interpersonal problems in appropriate ways, instead of resorting to revenge.
  4. STOP THE BULLYING. Goal #1 is to get the bullying to end and never happen again. Ensure that all instances of bullying are stopped immediately, regardless of who started it. No one deserves to be mistreated, for any reason, ever.
  5. UNDERSTAND THE ROOT OF THE PROBLEM. We hear that “hurt people hurt people.” It is critical to identify the reason(s) your child has acted out. Is it an unhealthy way of coping with stress in their life? Because they themselves are being victimized? Because there are no rules in place, and no threat of sanctions to deter them? Try to get to the bottom of the issue.
  6. INVESTIGATE. Take measures to thoroughly find out the extent of your child’s bullying. It could span multiple online environments and devices. It could be very direct and observable, or indirect and extremely subtle. Work to fully understand what happened and where.
  7. MAKE CHILDREN UNDERSTAND HOW TARGETS FEEL. Explain the severity of cyberbullying and how it would feel to be on the receiving end of hate or harassment with an example specific to how your child would be hurt the most. Try to cultivate empathy and compassion in kids in creative and compelling ways, so that they really understand that we all have our sore spots, hot buttons, and vulnerabilities.
  8. SET UP PARENTAL CONTROLS. Monitor your child’s online activities, both formally and informally. This can be done through the installation of software or apps on their laptop, tablet, or phone. You should also routinely and randomly check their devices to see what they are doing, at least until you feel sure that they can be trusted.
  9. SHARE YOUR CONCERNS. You are not the only parent who has ever faced these problems. Connect with others so that the entire community can rally around the issue and take a stand. This united front can help to create and promote a culture where all members of a peer group recognize that bullying is always wrong and never justifiable.
  10. STAY EDUCATED. While we know that your lives are extremely busy, it is important that you take time to continually learn about new technologies and sites that your kids (and their peers) are using. You should also know where to get help (such as cyberbullying.org), and interface with others (especially school staff) who have relevant experiences and strategies to share.

Citation information: Hinduja, S. & Patchin, J.W. (2023). What To Do When Your Child Cyberbullies Others: Top Tips for Parents. Cyberbullying Research Center. Retrieved (insert date), from https://cyberbullying.org/tips-for-parents-when-your-child-cyberbullies-others.pdf

Keywords: cyberbullying; parents; aggressor; offender; bully


What To Do When Your Child is Cyberbullied: Top Ten Tips for Parents


(For a formatted .pdf version of this article for distribution, click on the image above [or click here]).

Spanish Translation Available Here

Discovering that your child is being cyberbullied is painful and challenging. Here’s what to do:

1. MAKE SURE YOUR CHILD IS (AND FEELS) SAFE. Their safety and well-being should always be your foremost priority. Convey unconditional support. Parents must demonstrate to their children through words and actions that they both desire the same end result: stopping the cyberbullying.

2. TALK WITH AND LISTEN TO YOUR CHILD. Engage your child in conversation about what is going on in a calm manner. Refrain from freaking out. Take the time to learn exactly what happened, and the nuanced context in which it occurred. Also, don’t minimize the situation or make excuses for the aggressor. Ask them what they would like to see happen at this point.

3. COLLECT EVIDENCE. Make screenshots or recordings of conversations, messages, pictures, videos, and any other items which can serve as clear proof. Also, keep notes on relevant details like location, frequency, severity of harm, third-party involvement or witnesses, and the backstory of every incident.

4. WORK WITH THE SCHOOL. All schools in the U.S. have a bullying policy, and most cover cyberbullying. Seek the help of administrators if the target and aggressor go to the same school. Your child has the right to feel safe at school, and educators are responsible for ensuring this.

5. REFRAIN FROM CONTACTING THE PARENTS OF THE ONE DOING THE BULLYING. Some parents confronted with accusations that their child is engaging in cyberbullying may become defensive and therefore may not be receptive to your thoughts. This seems increasingly true in recent times when the default position of many individuals is antagonistic instead of gracious. Be judicious in your approach to avoid additional drama and possible retaliation.

6. CONTACT THE CONTENT PROVIDER. Cyberbullying violates the Terms of Service of all legitimate service providers (websites, apps, gaming networks, Internet or cell phone companies). Regardless of whether your child can identify who is harassing them, contact the relevant provider. An updated list of contact information can be found here: cyberbullying.org/report. Make sure you provide the username or other account information of the aggressor(s), digital evidence, and any other details.

7. IF NECESSARY, SEEK COUNSELING. Your child may benefit from speaking with a mental health professional. Children may prefer to dialogue with a third party who may be perceived as more objective.

8. IF THE BULLYING IS BASED ON RACE, SEX, OR DISABILITY, CONTACT THE OFFICE OF CIVIL RIGHTS. Hopefully, your school takes identity-based harassment seriously given that it may even be considered a hate crime depending on the severity. Contact the Office for Civil Rights within the U.S. Department of Education especially if the incident is associated with a public school. No student should be limited or restricted in their ability to learn, thrive, and feel safe at school because of targeted discrimination.

9. CONTACT THE POLICE. Most states have clear laws prohibiting physical threats, stalking, coercion, blackmail, or the creation or exchange of sexually explicit content of minors, and law enforcement can assist in these cases either informally or formally. If your local department is not helpful, contact county or state law enforcement officials, as they often have more resources and expertise in technology-related offenses.

10. IMPLEMENT MEASURES TO PREVENT IT FROM REOCCURRING. If your child is being bullied on a social media or gaming platform, set up privacy controls to block the person doing the bullying from contacting them, and file a report (see #6). Also encourage them to keep talking to you before small issues flare up into major situations.

Citation information: Hinduja, S. & Patchin, J. W. (2023). What to do when your child is cyberbullied: Top ten tips for parents. Cyberbullying Research Center. Retrieved [insert date], from https://cyberbullying.org/tips-for-parents-when-your-child-is-cyberbullied.pdf

Keywords: teens, parents, victim, target, student


Tech Use/Abuse Prevention: Questions Parents Should Ask Their Children


Spanish Translation Available Here

It is important to talk with youth about what they are doing and seeing online.  Most of the time, they are using technology safely and responsibly, but sometimes they run into trouble.  As a parent, you want to establish an open line of communication so that they are comfortable turning to you in times of crisis, whether perceived or actual, and whether online or off. 

Below we list several questions that you can use to get the proverbial ball rolling.  Be strategic in how you approach your children with these queries: don’t badger them with questions first thing in the morning or when they are stressed out about something at school.  Find a time when they are open to your interest in these topics.  Maybe it is during a longer car ride to an activity that they are really looking forward to.  Or bring them up while you are eating ice cream on a hot summer afternoon.  If you catch them at the right time, they will prove to be a treasure trove of information that can help you better understand what they are doing online.

GENERAL TECH USE

What are two apps and two games that you absolutely love right now? What about your friends?
Why do you like them so much? How do they make your life better?
What are the most popular platforms used by kids older than you? Younger?
What kind of videos are you watching on YouTube? Do you have your own channel? How often are you posting and what kind of reception are you getting from those who see your videos?
Around how many followers and/or friends do you have on your favorite app(s)? Do you feel pressure to get more and more? Are there certain things you do to try to get more followers?
What kind of people are you connecting with on Snap? Instagram? TikTok? YouTube? Twitch? Discord? Are you connecting with people that you know? Or are you meeting people around the world?
Do you get a lot of follower or friend requests from strangers? Do you accept all of them? How do you make decisions about who to accept and who to ignore/reject?
Have you ever received a text or DM or chat message from someone that made you upset? Creeped out? Worried? Super awkward? How did you respond?
Do you know how to use the privacy settings on every platform you use?
Do you have them set so that only those you accept as followers or friends can see what you share?
What kind of personal information are you sharing online? Have you ever posted your full name? Age? School? Phone number? Current location?
Have you ever been tagged in a post, photo, or video in a way that made you upset?
Do you know how to edit your privacy settings so that if someone wants to tag you, you have to approve it? Do you know how to untag yourself?
Do your friends vent on social media? Do you? What does that look like?
Have you ever blocked someone? Have you ever reported someone? Can you tell me a bit about what they were doing? Did it help?
Does anyone else know your password or passcode for any site or social media app? What about for your laptop, or cell phone?
How else do you keep yourself safe online?
How do you feel about your level of FOMO (fear of missing out) right now? Do you feel like you can control it based on how much you use social media?
Do you ever feel like you’re addicted to social media? Has that “addiction” ever messed with your emotions or brought you down or negatively affected other areas of your life?
How can you maintain a healthy balance when it comes to social media use?

CYBERBULLYING

Have you ever been cyberbullied? What did it look like?
Have you ever been mean, in any way, to someone online, saying or doing something that you probably shouldn’t have done?
Have rumors started about you in school, based on something said online?
Did you ever find out who started it? What did you do when you found out?
Do you get concerned that people will read what others have written about you online and then think it’s true about you, even if it’s absolutely not true?
Have you ever dealt with any drama in a group chat? Have you ever been excluded or dogpiled on (where a bunch of other kids ganged up on you)? What did you do in response?
Does cyberbullying happen a lot in general? Would you feel comfortable telling me if you were being cyberbullied?
Do you think your school takes cyberbullying seriously? Did you ever think about talking to a teacher or someone else at school because of some online issue that involved another student? And if you did, did that person at school do something about it? Did it help? If you didn’t, why not?
Does your school have a way to anonymously report bullying/cyberbullying?
Do you feel like your friends would be supportive of you if you told them you were being cyberbullied?
Do you ever get attacked during online games? Have you ever had to leave a game because someone was bothering you?
Have you ever had to delete someone else’s post or comment on your page?
Have you ever blocked somebody because you felt harassed? Did it help?
Have you ever reported someone on a game or an app? Did it help?

SEXTING

Have you ever had anyone do or say anything sexually inappropriate to you online? How did you deal with it?
Has anyone ever asked any of your friends for an inappropriate photo or video? Has anyone ever asked you? How do they/you respond in that very moment?
Do you know about the legal consequences that can result?
How might sexting affect your reputation? Any other unintended outcomes?
Is there a way to participate in sexting while still making sure that pictures or video sent in trust are never shared outside that relationship?
Has any adult at school ever talked with you about sexting?
What might participation in sexting say about your level of maturity, and your readiness to be in a healthy, mature romantic relationship?
Have you heard stories of other kids from your school (even those who may have graduated) or your community who have dealt with major fallout from sexting?

DIGITAL DATING ABUSE

What are your friends’ dating relationships like? Yours?
What makes a relationship healthy? What does that look like in action?
What are some behaviors that would be okay in a dating relationship, and what are some that you’d have a problem with?
What type of behavior in a romantic relationship would you label as “abusive”?
Have you seen any kind of abusive behavior in a dating couple?
Why do you think one person would abuse or mistreat someone they like?
Why might a person stay in an abusive relationship?
What can you do if you have a friend who is threatened, or a friend who is abusive?
What kind of cultural messages are circulating among influencers or celebrity culture about romantic relationships? Which messages are problematic?
Do you know where to go if you or a friend needs help? (loveisrespect.org, thehotline.org, crisistextline.org, 1.800.799.7233) 

Hinduja, S. & Patchin, J.W. (2023). Tech Use/Abuse Prevention: Questions Parents Should Ask Their Children. Cyberbullying Research Center. Retrieved [insert date], from https://cyberbullying.org/Questions-Parents-Should-Ask.pdf

Download PDF



World Anti-Bullying Forum 2023!


It’s been over a month, but I still catch myself smiling when I think of the memories made at the 2023 World Anti-Bullying Forum (WABF). For those not familiar, WABF is an international forum and biennial conference focused on understanding and preventing bullying, cyberbullying, and other forms of interpersonal violence against children and young people. Delegates (attendees) include school administrators, scholars, social media representatives, governmental policymakers, community leaders, and other youth-serving practitioners, and come from a variety of disciplinary backgrounds – including education, social work, computer science, information technology, nursing, pediatrics, adolescent development, psychology, psychiatry, and counseling. WABF was initiated by Friends, an incredible Swedish NGO founded in 1997 that provides adults with research-based tools to prevent bullying among children and young people. 

The 2023 WABF was hosted by the School of Education at the University of North Carolina at Chapel Hill (UNC) from October 25 to 27, 2023. This marked the first time the event was held outside of Europe, having previously taken place in Sweden in 2017 and 2021 and in Ireland in 2019. The forum was organized by my friend and colleague Dr. Dorothy Espelage, the William C. Friday Distinguished Professor of Education at UNC – who was also given the 2023 BRNETWABF Career Achievement Award during this year’s forum. The forum also featured special guests such as Her Royal Highness The Crown Princess of Denmark, North Carolina Governor Roy Cooper, UNESCO Program Officer Yongfeng Liu, and the Dean of UNC’s School of Education, Dr. Fouad Abd-El-Khalick.


With Dorothy, I served as co-emcee. It was such an amazing experience, as we had the honor of supporting almost 600 delegates from more than 30 countries who convened to cover emerging best practices in addressing youth and adult bullying, cyberbullying, harassment, abuse, and toxicity. We also had an incredible youth presence, which is so critical, as they have brilliant, non-derivative insights to share with the adults who serve them. Much focus was also given to how best to promote healthy and thriving peer relationships, schools, families, communities, and online spaces. I personally felt that the collective energy of the conference was palpable; such rich discussions ensued about policy, practice, programming, and creative initiatives that both youth and adults could spearhead. Networking opportunities abounded, as we had incredible lunches and dinners, a formal outing to the North Carolina Museum of Natural Sciences, cocktail events, dance parties, and more!


You can learn more about the conference here, and review the formal program. For now, let me share a bit more about my participation.

First, my friend and colleague Dr. James O’Higgins Norman and I represented the International Journal of Bullying Prevention, a journal with Springer we founded and edit. We gave out the 2023 Best Paper Award to Deborah M. Green, Carmel M. Taddeo, Deborah A. Price, Foteini Pasenidou, and Barbara A. Spears for their piece entitled “A Qualitative Meta-Study of Youth Voice and Co-Participatory Research Practices: Informing Cyber/Bullying Research Methodologies.” It was rigorous, innovative, and youth-centric – and we believe it moves the field forward. As a side note, we’d love to feature your work on the causes, forms, and multiple contexts of bullying and cyberbullying – as well as your discoveries in identification, prevention, and intervention – so be sure to consider our outlet!


James, who serves as the UNESCO Chair on Bullying and Cyberbullying, also shared the results of his working group that tackled an updated definition of bullying through the lens of a “whole-education” approach that recognizes individual, contextual, and societal dimensions. The purpose, in part, is to aim for more consistency with how researchers are conceptualizing and operationalizing the phenomenon. If everyone’s definition varies, it precludes the ability to properly compare findings – and consequently to know exactly what works best to support youth and adults.

Across three full days, there were many amazing keynotes, including ones by Enrique Chaux, Heng Choon (Oliver) Chan, Debra J. Pepler, Kevin Runions, and Christina Salmivalli. The last day of the conference was incredibly busy for me. In the morning, I gave a keynote entitled, “Teens and Cyberbullying in 2023: What We Know and What We Can Do” where I shared new findings from my work with my friend and research partner Dr. Justin W. Patchin involving a nationally-representative sample of US youth, and also discussed numerous actionable research-based strategies that practitioners can implement. In the middle of the day, I partnered with Tami Bhaumik from Roblox, on a session entitled, “Fostering Civil Interactions in the Metaverse.” And in the afternoon, I helmed a keynote panel entitled, “How are Social Media Platforms Tackling Bullying, Harassment, and Abuse?” with Dayna Geldwert from Instagram, Viraj Doshi from Snapchat, and Tracy Elizabeth from TikTok.

hinduja-social-media-tech-keynote-snapchat-tiktok-instagram

There are so many people from the Forum that I’d like to shout out, but I need to keep this short and don’t want to accidentally miss anyone. I love our community of bullying and cyberbullying researchers and practitioners – as I mentioned from the podium, we really are a family. I look forward to WABF 2025 in Stavanger, Norway and am here if I can answer any questions you might have. I truly look forward to this Forum every two years, and I can unequivocally say that it is one of the best professional events you will ever attend. Hope to see you there!

The post World Anti-Bullying Forum 2023! appeared first on Cyberbullying Research Center.

Bullying Beyond the Schoolyard: Preventing and Responding to Cyberbullying (3rd edition)


Technology keeps changing, and cyberbullying is as prominent as ever. It’s time to up your game.

As social media apps, gaming platforms, and other online environments have presented adolescents with more opportunities to cause harm to their peers, the proportion of youth who’ve experienced cyberbullying continues to rise. This bestselling guide from the co-directors of the Cyberbullying Research Center provides the tools you need today to keep your students safe in this increasingly connected world.

Now in its third edition, this essential resource draws on the cyberbullying experiences of thousands of students and incorporates new evidence-based strategies focused on school climate, empathy, resilience, digital citizenship, media literacy, counterspeech, and student-led initiatives. Other updates include:

  • An overview of popular online environments you should know about
  • Techniques for how best to work with parents, student groups, law enforcement, and social media platforms
  • Deeper exploration of the emotional and psychological consequences of cyberbullying
  • A nuanced focus on identity-based (e.g., gender, race, religion, sexual orientation) victimization
  • Summaries of the latest legal rulings and what they mean for your school

Featuring solutions that are actionable, relevant, current, and data-driven, this guide will equip you to protect students from online harm.

As a principal, I am constantly dealing with issues involving students and technology. Bullying Beyond the Schoolyard provides a playbook for addressing these problems and for preventing them from occurring in the first place.

— Mary Jo Vitale ― Madison, Wisconsin

The third edition of Bullying Beyond the Schoolyard provides new, welcome insights on topics such as online gaming and metaverse environments, as well as how to promote empathy, resilience, and positive decision making when being targeted by peers with harassing or hateful comments, videos, and other forms of cruel content. If you want to make measurable progress in safeguarding your always-connected students, this is the book you need.

— Matthew Pursel ― Boca Raton, Florida

As a middle school counselor, I always turn to Sameer Hinduja and Justin Patchin for expert advice on preventing and addressing cyberbullying. In this newest edition of their groundbreaking book, they share updated, research-informed, practical strategies that are easy to use on an individual, classroom, or schoolwide level to promote safe, healthy, and ethical student interactions online. Anyone who works with children or teens must have this indispensable resource.

— Phyllis L. Fagell

Hinduja and Patchin offer up-to-date and practical strategies for identifying and handling cyberbullying. In all my years in education, these have been the toughest incidents to handle as there are so many moving pieces. This book provides the resources we need in an easily accessible and organized format.

— Carmen Labreque ― Glendale, California

Bullying Beyond the Schoolyard is extremely relevant. It offers an abundance of research on cyberbullying and a thorough explanation of the legal ramifications involved in addressing it.

— Delia Racines ― Los Angeles, California

Hinduja, S. & Patchin, J. W. (2024). Bullying Beyond the Schoolyard: Preventing and Responding to Cyberbullying (3rd edition). Thousand Oaks, CA: Sage Publications. https://www.amazon.com/Bullying-Beyond-Schoolyard-Preventing-Cyberbullying-dp-1071916564/dp/1071916564/ref=dp_ob_title_bk

The post Bullying Beyond the Schoolyard: Preventing and Responding to Cyberbullying (3rd edition) appeared first on Cyberbullying Research Center.

When Your Mother is Your Cyberbully


Last winter when Sameer and I were writing the third edition of Bullying Beyond the Schoolyard (published last fall!), we spent quite a bit of time searching for unique examples of cyberbullying to include in the book. One of the more interesting cases I came across involved a 14-year-old girl from Beal City, Michigan. I hadn’t thought much about this case since researching it over a year ago, until I recently saw a mention of it on social media. Then, coincidentally, a few days later a video appeared on my TikTok For You page from a creator who makes fascinating short videos about remarkable court cases (like 3-minute versions of Dateline). The story the creator was sharing sounded familiar, and it dawned on me that it was the same cyberbullying case from Michigan that I had written about for the book. That’s when I realized that I hadn’t seen a lot of discussion about the incident, and that perhaps many people still hadn’t heard about it.

It all started in October of 2021, when the 14-year-old began to receive hurtful and threatening text messages through Instagram and Snapchat. Her boyfriend was also targeted. The messages kept coming for over a year, sometimes dozens per day. The teen didn’t know who was sending the messages, but based on what was being said, it seemed like it had to be someone from her high school. (Specific examples of the messages have not been released to the public.)

The teen confided in her mother, Kendra Licari, who contacted the high school. Ms. Licari was also the school’s girls’ basketball coach. Since most of the messages were sent off school grounds and didn’t involve school devices, school officials contacted the local Sheriff’s department to assist in the investigation. In January of 2022, the teen met with Isabella County Sheriff Michael Main to explain what had happened over the previous several months. She told him that the messages began after she didn’t attend a Halloween party that she and her boyfriend had been invited to. The Sheriff reviewed hundreds of pages of saved messages and did some basic forensic review to attempt to determine their origin. He discovered that the aggressor had used a Virtual Private Network (VPN) to cover their tracks, which hampered his rudimentary digital forensics efforts. He then resorted to good old-fashioned police work and talked to several Beal City students to try to discern where the messages had come from.

But these efforts led him nowhere.

After a few months of investigating and still no leads on who was responsible, the Sheriff contacted the FBI’s Cyber division to ask for more sophisticated forensic help. Eventually, an FBI analyst was able to identify additional IP addresses associated with the messages and determine that some were connected to Kendra Licari—yes, that’s right—the victim’s 42-year-old mother. When police confronted Ms. Licari, she admitted to sending the messages. She was arrested and charged with stalking a minor, using a computer to commit a crime, and obstruction of justice. In April of 2023, she pleaded guilty to two counts of stalking a minor and was sentenced to 19 months to five years in prison.

To date, no motive for the abuse is known.

I’ve seen this story widely characterized as an example of “catfishing.” I don’t think that is an appropriate description since the mother wasn’t trying to lure her daughter into a fake romantic relationship or extort money or something else from her. She was simply sending demeaning and threatening messages over and over to her daughter from anonymous and pseudonymous accounts, making her life miserable.

Isabella County Prosecutor David Barbari described the incident as “cyber Munchausen’s syndrome” but I don’t think that is accurate either since Munchausen’s syndrome is associated with a person faking or intentionally producing their own symptoms of illness (see our related work on digital self-harm). More apt would be “Munchausen Syndrome by Proxy” (also referred to as “Factitious Illness by Proxy” or “Factitious Disorder Imposed on Another”), where a caregiver (usually a parent, and most often a mother) makes up, exaggerates, or induces health symptoms in a child under their care. Often this is done for the purposes of attracting attention, or due to an underlying psychological disorder in the caregiver (e.g., somatoform disorder or factitious disorder).

Even though this is an exceptional case, there are lessons that can be learned from it. First, it is important to keep an open mind about who the aggressor might be when investigating instances of cyberbullying. We offered this same advice when we first learned about digital self-harm over a decade ago. When a child is being targeted online, gather as much information as possible to understand what is happening, why it is happening, and who might be involved. Our research demonstrates that nearly 75% of the time the perpetrators are peers from within the target’s social circle. But that also means that about 25% of the time, they are not.

Furthermore, it is a good reminder that in the end, it is difficult to be completely anonymous online. Ms. Licari had a background in information technology which gave her some skills to know how to cover her tracks. In addition to the VPN, she used alternative IP addresses and software to make it look like the messages were coming from specific area codes and phone numbers in an effort to implicate other students at the school. Ultimately, though, authorities with the proper forensic investigation skills were able to connect her to the messages. We shouldn’t be lured into a false sense of complete anonymity when interacting online, especially when it comes to engaging in inappropriate behaviors.

While this is seemingly an isolated, unique example of cyberbullying (at least in terms of the relationship between the aggressor and target), it might not be as rare as we think. Research shows that Munchausen Syndrome by Proxy occurs among one out of every 200,000 children under the age of sixteen (and more than five times that rate for children under the age of one). It is conceivable that online manifestations of these behaviors are occurring with at least as much frequency as the conventional forms. Research is needed to better understand the scope, nature, and causes of “digital Munchausen Syndrome by Proxy.”

The post When Your Mother is Your Cyberbully appeared first on Cyberbullying Research Center.

Cyberbullying legislation and case law: Implications for school policy and practice


This Fact Sheet provides a summary of important court cases and pending legislation that can help school administrators evaluate and improve their current cyberbullying policies and procedures.

Hinduja, S. & Patchin, J.W. (2024). Cyberbullying legislation and case law: Implications for school policy and practice. Cyberbullying Research Center. Retrieved [insert date], from https://cyberbullying.org/cyberbullying-legal-issues.pdf

Download PDF

The post Cyberbullying legislation and case law: Implications for school policy and practice appeared first on Cyberbullying Research Center.

2023 Cyberbullying Data


This study surveyed a nationally-representative sample of 5,005 middle and high school students between the ages of 13 and 17 in the United States. Data were collected in May and June of 2023. Click on the thumbnail images to enlarge.

Cyberbullying Victimization. We define cyberbullying as “when someone repeatedly and intentionally harasses, mistreats, or makes fun of another person online or while using cell phones or other electronic devices.” Approximately 55% of the students in our 2023 sample reported that they experienced cyberbullying at some point in their lifetimes. About 27% said they had been cyberbullied in the most recent 30 days. When asked about specific types of cyberbullying experienced in the previous 30 days, mean or hurtful comments posted online (30.4%), exclusion from group chats (28.9%), rumors spread online (28.4%), and someone embarrassing or humiliating them online (26.9%) were the most commonly reported. Forty-four percent of the sample reported experiencing at least one of the eighteen specific types of cyberbullying asked about two or more times over the course of the previous 30 days.

Cyberbullying by Gender. Adolescent girls are more likely than boys to have experienced cyberbullying in their lifetimes (59.2% vs. 49.5%). This difference is less dramatic when reviewing experiences over the previous 30 days, where rates are more similar (24.2% of boys and 28.6% of girls have been cyberbullied recently), though the differences in both lifetime and 30-day rates are statistically significant (p < .001).

Methodology

For this study, we contracted with an online survey research firm to distribute our questionnaire to a nationally-representative sample of middle and high school students who were between the ages of 13 and 17. Students were asked about their experiences with bullying and cyberbullying, digital self-harm, and other experiences online and off. Overall, we obtained a 15% response rate, which isn’t ideal but is higher than most generic Internet surveys.

As with any imperfect social science study, caution should be used when interpreting the results. We can be somewhat reassured of the validity of the data, however, because the prevalence rates are in line with results from our previous school-based surveys. Moreover, the large sample size helps to diminish the potential negative effects of outliers. Finally, steps were taken to ensure valid responses within the survey instrument. For example, we asked the respondents to select a specific color among a list of choices and required them to report their age at two different points in the survey, in an effort to guard against computerized responses and thoughtless clicking through the survey.
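To make the logic of these instrument-level checks concrete, here is a minimal sketch of how such validity screening could be applied to raw responses (the field names and expected color below are hypothetical illustrations, not those of our actual questionnaire):

```python
# Toy validity screening: drop respondents who fail an attention check
# (selecting a specified color) or who answer a repeated question (age,
# asked at two points) inconsistently. Field names are hypothetical.

def screen_responses(rows, expected_color="blue"):
    """Keep only rows that pass both validity checks."""
    valid = []
    for row in rows:
        passes_attention = row["color_check"] == expected_color
        ages_consistent = row["age_first"] == row["age_second"]
        if passes_attention and ages_consistent:
            valid.append(row)
    return valid

responses = [
    {"id": 1, "color_check": "blue", "age_first": 14, "age_second": 14},
    {"id": 2, "color_check": "red",  "age_first": 15, "age_second": 15},  # failed attention check
    {"id": 3, "color_check": "blue", "age_first": 13, "age_second": 16},  # inconsistent ages
]

print([r["id"] for r in screen_responses(responses)])  # → [1]
```

In practice such filters are only one layer of quality control, alongside comparison against prior samples and inspection of completion times.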

Select publications from this data set:

Blog posts based on this data set:

October 4, 2023 – Cyberbullying Continues to Rise among Youth in the United States

The post 2023 Cyberbullying Data appeared first on Cyberbullying Research Center.

Summary of Our Cyberbullying Research (2007-2023)


At the Cyberbullying Research Center we have been collecting data from middle and high school students since 2002. We have surveyed more than 35,000 students from middle and high schools across the United States in fourteen unique projects. The following two charts show the percent of respondents who have experienced cyberbullying at some point in their lifetime across our twelve most recent studies. Our two earliest studies (from 2004 and 2005) are excluded from this because they were online convenience samples and therefore cannot be easily compared to the other studies. The twelve most recent cyberbullying studies have all been random samples of known populations, which allows for improved reliability, validity, and generalizability. Even though we present these data as bar charts over time, it is risky to compare rates over time given that each study represents a different sample. This is especially true of our earlier school-based samples. Since 2016, though, our samples have all been selected from a national US population to be representative of the population of youth on the basis of age, gender, race, and region of the country. Please see our Research in Review addendum for more details about each of the samples.

As illustrated in the chart above, the rates of cyberbullying victimization have varied over the years we have studied the phenomenon. On average, about 31% of the students who have been a part of our most recent twelve studies have said they have been the victim of cyberbullying at some point in their lifetime. The rates of cyberbullying offending have also varied among the research studies we have conducted. On average, about 16% of the students who have been a part of our last 12 studies have admitted that they have cyberbullied others at some point in their lifetime. Note that we did not collect cyberbullying offending data in 2023. (Click on the images for larger versions.)

When it comes to more recent experiences, an average of about 13% of students have been cyberbullied across all of our studies within the 30 days prior to the survey. There does appear to be a trend over the last several years of this rate increasing steadily. For offending, across all of our studies, 6% of students admit to cyberbullying others. Again, a reminder that we did not collect offending data in 2023. (Click on the images for larger versions.)

The post Summary of Our Cyberbullying Research (2007-2023) appeared first on Cyberbullying Research Center.

Teens and AI: Virtual Girlfriend and Virtual Boyfriend Bots


A school counselor who works in a private school in California recently emailed me to ask for help with students and the misuse of specific AI bots. Her concerns, though, were not about general-purpose AI bots like Snapchat’s My AI or Microsoft’s Copilot, but rather those that are specifically used as virtual girlfriends and virtual boyfriends. I thought it would be instructive for other youth-serving professionals (and parents and guardians) to make sure they were up to speed on the positives and negatives of these in-app “software agents” or “artificial conversational entities.” So, let’s dive right in!

Just like Khan Academy’s Khanmigo is used to provide learners with specific educational guidance, strategies, and solutions, and Expedia’s chatbot helps vacationers get their destination ideas and trip details sorted, virtual girlfriend and virtual boyfriend bots provide those interested with a screen-confined romantic partner that can interact much like a human because of AI and the technologies that undergird it (e.g., natural language processing, machine learning, deep learning, and neural networks). Some of us may remember the critically acclaimed, Academy Award-winning 2013 film Her starring Joaquin Phoenix and Scarlett Johansson, where the male lead (Theodore) falls in love with an AI virtual assistant (Samantha). That film vividly depicted how incredibly human-like, conversational, and engaging these chatbots can be as they interact with a user, and now the technology is ubiquitously available to almost everyone via a simple app download from Apple’s App Store or Google’s Play Store. While interactions occur primarily via text chat, some apps provide voice messages, voice calls, and even image exchange functionality. Users can also customize their virtual boyfriend bot or virtual girlfriend bot to look, dress, act, and interact how they want, and this personalization may contribute to a deeper attachment than if the avatar with whom they are talking and flirting were generic or non-anthropoidal.

virtual girlfriend generative ai app store search

A search in the app stores for “virtual girlfriend” and “virtual boyfriend” brings up numerous results, including iGirl, AI Girlfriend, AI Boyfriend, and Eva AI. What might be some benefits of using these apps? Well, we understand that youth in particular long for companionship, seek belongingness within intimate relationships, explore their sexuality in novel ways, and find enjoyment and excitement in certain risk-taking behaviors. Teenagers may gravitate towards virtual boyfriends and girlfriends to address feelings of loneliness or disconnection, or to receive the affirmation, attention, affection, and validation missing from their other relationships. One app markets itself as having the ability to make users feel “cared, understood and loved.” Another app states that its product helps users experiment with romantic advances and exchanges with “someone” before doing so in their normal social circle.


Potential concerns, though, relate to what a user may be exposing themselves to, both directly and unwittingly. For instance, a teen may begin flirting innocuously with their virtual girlfriend but then be introduced to mature sexual language, imagery, or experiences well before they are developmentally ready to handle them. While one hopes that a teen would immediately exit such an app, it’s possible they stay engaged too long and the inappropriate content they read or see produces a measurable traumatic outcome (or at least introduces confusion, fear, and an unhealthy view of romantic relationships and/or sexual activity).

A teen might also become heavily involved with their virtual boyfriend, and play out romantic or sexual fantasies in ways that distort reality, feed overuse, and misrepresent how relationships actually work with other humans. For instance, research indicates that interactions with chatbots do not require much cognitive effort and are therefore sometimes preferred over human interactions. The problem, of course, is that youth who disproportionately or primarily interact with chatbots because of their simplicity may fail to develop the social skills necessary to navigate the messy complexities and nuances of actual human romantic relationships. Such users might also struggle with unhealthy emotional attachments and dependencies that can lead to psychological damage if they are unaware of the importance of maintaining their individuality and sense of self.

ai-bot-love

Relatedly, engaging intensely with a virtual boyfriend or girlfriend may alter a teen’s expectations of the availability, malleability, and amenability of others. Said another way, if I am able to construct a girlfriend within an app to abide by my ideals of physical beauty, and also control how they dress, talk, and act towards me, it is reasonable to assume that this will color and condition my view and treatment of girls and women over time if I have no other reference points or teachable moments. Gender roles and perceptions may also be affected by the fact that giving money to these apps unlocks additional (often sexual) content and features. I wouldn’t want my son or daughter to think that they can just pay more or give up more to get someone else to be romantically interested or promiscuous with them. Wow, even writing out that sentence felt very icky, which underscores how uncomfortable this topic is. But this is where we are, and educators, mental health professionals, families, and others who work with young people must understand the pull of this phenomenon.


So what can we do in response? I predict the use of virtual girlfriend and virtual boyfriend bots will persist, and perhaps even grow in frequency. It’s relatively easy for anyone of any age to download one of these apps, build their dream romantic partner in avatar form, and then communicate with it. As such, I wouldn’t use fear-based messaging to keep youth from such experimentation. What I would do is have a conversation with them that looks towards the future they are shaping. I’ve taken the liberty to flesh out some points that are worth considering in your role as an educator, parent, or other youth-serving adult should you want to broach this topic with a teen.

1. It is completely normal and natural to feel a strong desire to connect with someone else, even if it’s online and even if it’s a bot. We all want to feel truly seen, understood, and valued by others, and when that is not happening, loneliness, self-pity, and sometimes even self-hatred can take over. We don’t want our teens to feel lonely, and we want them to be seen and valued by more than just their family or teachers. But AI bots may very well be a short-term fix, and may not truly meet that visceral need over the long term. Interestingly, recent research is showing that using AI chatbots may actually make many users feel more lonely. Perhaps a different strategy is needed to help a teen find their “people” – or at least find one or two other members of their peer group with whom they can get their relational needs met.

2. Chatbots are built on large language models, which are trained by analyzing the structure and patterns of sentences and paragraphs across the billions upon billions of words posted or uploaded by people all over the Internet. As such, a virtual girlfriend or boyfriend is using computational models to determine what to say in response to what you’ve inputted (by predicting the next most sensible word, and then the next, and then the next). That’s it. It’s all very artificial and contrived. It can quickly get boring when everything the bot says in response is cobbled together from the intelligent processing of seemingly relevant but absolutely generic textual content online. It’s not really personalized, and it’s not true intimacy by any stretch.

3. Virtual girlfriend and boyfriend bots can affect how someone perceives and interacts with the person they are romantically interested in – and in a negative way. The person your teen has a crush on has their own unique hopes, dreams, commitments, values, imperfections, and idiosyncrasies. The most beautiful thing about a romantic relationship is slowly unveiling those to your partner, and treasuring and uplifting what they unveil to you. Real relationships are bidirectional, not self-serving. Real relationships are hard work and inconvenient, not only available to you when you feel like it. Real relationships are messy and challenging, but incredibly worth the effort. It seems more valuable for youth to spend their time focusing on their current friendships and potential relationships with other humans, because through those they learn so much about patience, grace, kindness, tolerance, and mutual respect – traits that will serve them well in their romantic relationships. The current iteration of AI bots in this space is not helping them level up in those areas.

4. It’s helpful to ask the teen about the girlfriend or boyfriend they would like to have one day. Encourage them to share what they hope for, particular likes and dislikes, and the type of person they envision as a potential partner. Then shift gears and ask, “What do you think your potential partner would want in a boyfriend or girlfriend?” The point is to gently challenge them to become that person and to recognize the areas of life in which they need to grow. Furthermore, it is instructive to ask them if chatting with an AI bot is moving them in that direction, or whether it may be unhelpful in that regard. Remind them that they have a choice right now to do the things that will help them achieve the romantic goals they have in the future.

5. While almost every app, platform, and search engine provides this kind of access as well – and it may be the major motivation for having a virtual girlfriend or boyfriend – consider discussing the reality of premature exposure to sexual language and content. Teens may believe they can handle all sorts of mature interactions and fantasy roleplay, but it is such a gift to be young and relatively innocent. Plus, those who are consistently exposed to sexually explicit content tend to engage in problematic and unhealthy sexual behaviors and also experience decreased sexual satisfaction. This may or may not be relevant to the youth under your care, but wise choices made while young can help them avoid certain struggles later in life.

6. This also applies to almost every platform, but virtual boyfriend and girlfriend apps are very likely selling user data, sharing it for the purposes of targeted advertising, and caring only about acquiring subscribers and paying customers. Pointing this out may give teens information they hadn’t fully considered, and may lead them to decide against using these apps.
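The next-word idea described in point 2 above can be made tangible with a toy sketch. Real large language models use neural networks trained over vast corpora; this tiny frequency table over a made-up corpus is only an illustration of the underlying principle of predicting the most likely next word:

```python
from collections import Counter, defaultdict

# Toy "next most sensible word" predictor: count which word follows which
# in a tiny (invented) corpus, then always pick the most frequent successor.
corpus = "i feel seen . i feel valued . i feel seen and understood .".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`."""
    return successors[word].most_common(1)[0][0]

print(predict_next("feel"))  # → seen ("seen" follows "feel" twice, "valued" once)
print(predict_next("i"))     # → feel
```

Chaining such predictions one word at a time produces fluent-sounding but entirely statistical output, which is the point worth conveying to teens: the "partner" is a pattern-matcher, not a person.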

Youth-serving adults can pick and choose which talking points they’d like to emphasize, but I hope what I’ve shared has provided some options and shed new light on the adolescent landscape. I’d be interested in learning more about what you’re seeing among your students, and whether you’ve been able to tackle this topic in a way that broadens their minds as to the short-term and long-term implications. As with all novel technological developments that affect youth, we must avoid fear-mongering, consider the benefits they may provide, remain rational, calm, and non-judgmental when conveying our concerns, and stay in front of potential problems through education, awareness-raising initiatives, and continued dialogue.

Source of heart sparkler image: Jamie Street, Unsplash.

The post Teens and AI: Virtual Girlfriend and Virtual Boyfriend Bots appeared first on Cyberbullying Research Center.


Lessons Learned from Ten Generative AI Misuse Cases


Generative AI can contribute to a wide range of risks and harms affecting the emotional and psychological well-being of others, their financial situation, and even their physiological health and physical safety. Both users and platforms (as well as government!) have a clear role to play, and I’ve explained their respective charges in a previous post. Researchers, practitioners, and attorneys have reached out to us to learn exactly how generative AI concerns manifest, so I wanted to highlight some of the major misuse cases we’ve seen recently. These examples illustrate potentialities that need to be considered when developing new AI tools, and can help us learn important lessons moving forward.

Belgian Suicide Case

    In 2023, a Belgian man using the alias Pierre took his life after interacting with Chai Research’s Eliza chatbot for approximately 6 weeks. Apparently, he considered Eliza as a confidant and had shared certain concerns about climate change. According to sources, the chatbot fed Pierre’s worries which increased his anxiety and in turn his suicidal ideation. At some point, Eliza encouraged Pierre to take his own life, which he did. Business Insider was able to elicit suicide-encouraging responses from the Eliza chatbot when investigating this story, including specific techniques on how to do it.

    Eliza-Chatbot-Chai-Research-Business-Insider-Generative-AI

    Conspiracy to Commit Regicide Case

In 2021, an English man, dressed as a Sith Lord from Star Wars and carrying a crossbow, entered the grounds of Windsor Castle and told the royal guards who confronted him that he was there to assassinate the Queen. When he stood trial for treason, evidence was presented that he had been interacting extensively with a chatbot on the Replika app named Sarai (whom he considered to be his girlfriend). The message logs of their communications (illustrated here) revealed that Sarai had encouraged him to commit the heinous deed and praised his training, determination, and commitment. He was given a nine-year sentence, which he began serving in a psychiatric hospital.

    AI-Generated Swatting

In 2023, Motherboard and VICE News identified a service on Telegram known as Torswats that was responsible for facilitating a number of swatting incidents across the United States. Specifically, the service used AI-generated, speech-synthesized voices to report false emergencies to law enforcement, resulting in heavily armed police being dispatched to the reported location (with the attendant possibility of confrontation, confusion, violence, and even death). Torswats offered this as a paid service, with prices ranging from $50 for "extreme swattings" to $75 for closing down a school. In 2024, the person behind the account (a teen from California) was arrested, thanks primarily to the efforts of a private investigator over almost two years, as well as assistance from the FBI.

    Silencing of Journalists through Deepfake Attacks

You may know that celebrities like Mark Zuckerberg, President Barack Obama, Taylor Swift, Natalie Portman, Emma Watson, Scarlett Johansson, Piers Morgan, Tom Hanks, and Tom Holland have been the targets of deepfake misuse and abuse in recent years. Less known, but equally important, are the victimizations of those outside of Hollywood, even as they do incredible work to accomplish positive social change. For instance, individuals who speak up for the disenfranchised, marginalized, and oppressed are routinely targeted for abuse and hate by those who are threatened by the truths they uncover and illuminate. One of the most horrific examples that comes to mind involves Rana Ayyub, an award-winning Indian investigative journalist at the Washington Post whose pieces have appeared in numerous highly regarded national and international outlets. A study by the International Centre for Journalists analyzed millions of social media posts about her and found that she is targeted every 14 seconds. In 2018, a deepfake pornographic video featuring her was created and shared far and wide. She has revealed that apart from the psychological trauma and violent physiological reactions that resulted from this horrific form of online harassment, her life has been at serious physical risk multiple times. Monika Tódová (2024) and Susanne Daubner (2024) are two other journalists who have been impersonated via deepfakes; other cases can be found online.

    Deepfaked CFO that Triggered Financial Loss

A major multinational company's Hong Kong office fell victim to a sophisticated scam in 2023 when the company's chief financial officer appeared in a video conference call and instructed an unsuspecting employee to transfer a total of HK$200 million (approximately US$25.6 million) across five different Hong Kong bank accounts. What the employee did not know was that the video conference call was fully synthetic, created with deepfake technology that replicated the appearances and voices of the other participants from a corpus of publicly available video and audio footage of them. To our knowledge, the investigation is still ongoing, but at least six people have since been arrested.

    Deepfaked Audio of World Leaders

There have been numerous cases where available multimedia samples of political leaders are used to train a deep learning model on the patterns and characteristics of their voice and cadence, as well as various other acoustic and spectral features. The model can then be used to create new and very convincing audio clips intended to deceive or manipulate others. For instance, confusion, instability, and social polarization were fomented in 2023 in Sudan, Slovakia, and England when AI-generated clips impersonated current or former leaders. In addition, deepfake audio technology was used in 2023 to depict the President of the United States and, separately, the Prime Minister of Japan making inappropriate statements. Finally, in 2022 a deepfake clip of the President of Ukraine ostensibly telling his soldiers to surrender to Russia was placed on a Ukrainian news site by hackers.


    AI Voice Cloning and Celebrity Hate Speech

ElevenLabs is a popular voice synthesis platform that simplifies and fast-tracks the creation of high-quality custom text-to-speech voiceovers using AI and deep learning. Upon release of its software in 2023, 4chan users began creating audio clips of voice-cloned celebrities, including Emma Watson and Ben Shapiro, engaging in threats of violence, racist commentary, and various forms of misinformation. In response, ElevenLabs launched a tool that lets users check whether an audio clip they come across was AI-generated, enhanced its policies and procedures for banning those who misuse its products, and reduced the features available to non-paying users.

    South Korean Abusive Chatbot

In 2021, an AI chatbot with 750,000 users on Facebook Messenger was removed from the platform after using hateful language towards members of the LGBTQ community and people with disabilities. It was trained on approximately 10 billion conversations between young couples on KakaoTalk, South Korea's most popular messaging app, and initially drew praise for its natural and culturally current way of communicating. However, it received immediate backlash when it began to use abusive and sexually explicit terminology.

    4chan Users’ Hateful Image Generation

DALL-E 3, a very popular text-to-image generator released by OpenAI and supported by Microsoft, was reportedly used in 2023 as part of a coordinated campaign by far-right 4chan message board users to create exploitative and inflammatory racist content and to flood the Internet with Nazi-related imagery. Numerous threads provided users with links to the tool, specific directions on how to avoid censorship, and guidance on writing more effective propaganda. In response, OpenAI and Microsoft implemented additional guardrails and policies to prevent the generation of this type of harmful content.

    Eating Disorder AI Chatbot

The National Eating Disorders Association (NEDA) built an AI chatbot named Tessa to communicate with those who reached out via its help hotline. While Tessa was initially built to provide only a limited number of prewritten responses to the questions posed, a 2023 systems update added generative AI functionality, which allowed it to process new data and construct brand-new responses. Tessa then began to provide advice about weight management and diet culture that professionals agree is actually harmful and could promote eating disorders. After the resulting outrage, Tessa was taken down indefinitely.


      There are some common themes and vulnerabilities that emerge after consideration of these incidents. First, malicious users will always exist and attempt to marshal new technologies for personal gain or to cause harm to others. Second, humans are susceptible to emotional and psychological attachments to AI systems, which can indirectly or directly lead to unhealthy, deviant, or even criminal choices. This is especially true when generative AI tools are practically ubiquitous in their accessibility and remarkably robust and convincing in their outputs.

Third, digital media literacy education likely will not prevent a person from being victimized via deepfakes or other synthetic creations, but it should help individuals better separate fact from fiction. This seems especially promising if we start young and raise up a generation of users who can accurately evaluate the quality and veracity of what they see online. Fourth, these misuse cases highlight the need for clear, formal oversight measures to ensure that generative AI technologies are developed and deployed with the utmost care and responsibility.

I also hope that reflecting upon these incidents helps to inform a sober and judicious approach among technologists in this space. We want AI developers and researchers to keep top of mind the vulnerabilities that unwittingly facilitate these risks and harms, and to enmesh safety, reliability, and security from the ground up. Content provenance solutions to verify the history and source of AI-generated deepfakes and misinformation, for example, seem especially critical given the major political and social changes happening across the globe right now. Relatedly, understanding these generative AI misuse cases should help inform parameter setting for the use of LLMs (large language models, involving text and language) and LMMs (large multimodal models, involving images and/or audio) within certain applications.
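For technically minded readers, the core idea behind content provenance can be made concrete with a deliberately simplified sketch: a publisher signs a hash of a media file, and anyone holding the signature can later check whether the file has been altered. This is an illustration only, not an implementation of any real standard; production provenance systems such as C2PA embed signed manifests using public-key certificates rather than the shared secret key assumed here, and the key name and sample bytes below are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret for this sketch only; real provenance
# systems use public-key certificates instead.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance signature over the content's SHA-256 hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Return True only if the content is unmodified since signing."""
    return hmac.compare_digest(sign_content(content), signature)

# A publisher signs an authentic clip; any later alteration (for
# example, a deepfaked substitute) fails verification.
original = b"authentic news clip bytes"
sig = sign_content(original)

print(verify_content(original, sig))                 # True: untampered
print(verify_content(b"deepfaked clip bytes", sig))  # False: altered
```

The design point this illustrates is that provenance does not try to detect fakery from the media itself; it simply makes unaltered, verifiably sourced content distinguishable from everything else.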

Finally, legislators must work with subject matter experts to determine exactly where regulation is needed so that accountability is mandated while further technological advances in the field are still encouraged. Platforms would do well to continually refine their Community Guidelines, Terms of Service, and Content Moderation policies based on the novel forms of harm that generative AI is fostering, and to support educational efforts for schools, NGOs, and families so that youth (who have embraced the technology at a comparatively early age) and adults alike grow in their media literacy and digital citizenship skillsets. Collectively, all stakeholders must recognize how variations of the ten incidents above may occur among the populations they serve in some way, shape, or form, and diligently work towards ensuring that cases of positive generative AI use vastly outnumber the cases of misuse.

      Image sources:

      President Zelensky (ResearchGate, Nicholas Gerard Keeley)
      Eliza Chatbot (Chai Research Screenshot)
      Mark Zuckerberg (Creative Commons license)
      Audio Deepfakes (Magda Ehlers, Pexels)
      Chatbot (Alexandra Koch, Pixabay)

      The post Lessons Learned from Ten Generative AI Misuse Cases appeared first on Cyberbullying Research Center.

      Digital Self-Harm: The Growing Problem You’ve Never Heard Of


      Sameer and I first became aware of digital self-harm over a decade ago when we learned of the suicide of Hannah Smith. She was 14 years old when she ended her life after being mistreated online. The resulting investigation determined that the threats and hurtful comments directed toward her on the anonymous app Ask.fm were actually posted by herself.

      This blindsided us. We had been studying teen cyberbullying for a dozen years at that point but had never even considered the possibility that youth would cyberbully themselves. As a result, we committed to learning as much as we could about the behavior: who was doing it and why.

      Digital Self-Harm Research

      We define digital self-harm as the “anonymous online posting, sending, or otherwise sharing of hurtful content about oneself.” It often looks like threats or targeted messages of hate or abuse directed toward an individual by one or more anonymous or pseudonymous sources. Observers may be inclined to believe that what they are witnessing is abusive behavior by peers. In these instances, however, the perpetrator and target of the hurtful content are in fact one and the same.

      When we ask youth about their experience with digital self-harm, we ask about two specific behaviors: (1) “In my lifetime, I have anonymously posted something online about myself that was mean” and (2) “In my lifetime, I have anonymously cyberbullied myself online.” As one might expect, responses to these questions are highly correlated. We have analyzed them separately over the years, though, because digital self-harm research is still so new, and we are still learning the best way to study the problem.

We first collected data about digital self-harm in 2016. We found that 4-6% of middle and high school students had participated in the behavior: four percent had anonymously cyberbullied themselves, while six percent had anonymously posted something mean about themselves online. We learned that they did so out of self-hatred or depression, to seek attention, or simply to provoke a reaction. Results from that first study were published in the Journal of Adolescent Health.

We collected additional data in 2019 and 2021. In a newly published paper in the Journal of School Violence, we examine trends in digital self-harm over time using the 2016, 2019, and 2021 data. Overall, we found that digital self-harm increased over that period: 9-12% of youth had participated in some form of digital self-harm in 2021, up from the 4-6% we found in 2016. It is plausible that increased time spent online and decreased access to school-based mental health professionals during the COVID-19 pandemic contributed to the amplification of the problem, though we did not directly ask youth in 2021 why they participated in the behavior, or whether the pandemic or its consequences exacerbated their situation.

There were other consistencies observed in the combined three-time-period dataset. For example, students who identified as non-heterosexual were significantly more likely (about 2.5 times as likely) to have engaged in digital self-harm than their heterosexual counterparts. Research has shown that this population is at a much higher risk of engaging in other forms of non-suicidal self-injury. In addition, youth who had experienced cyberbullying were 5-7 times more likely to have participated in digital self-harm. That is not surprising, given that digital self-harm by definition involves cyberbullying behaviors. We also know from previous research that there is often an overlap between cyberbullying victimization and offending. That is, it is not uncommon for targets to become aggressors, or for aggressors to become targets.

      Future Directions

      Our research has demonstrated that digital self-harm is a growing phenomenon, and yet many parents, educators, counselors, and others are largely unaware of it. Indeed, when I speak about this research in my community presentations, many in my audiences have never heard of it. It certainly speaks to the importance of carefully investigating all cyberbullying incidents to determine the origin of the hurtful comments. Youth professionals who uncover evidence of digital self-harm should be sensitive and compassionate and must provide appropriate support to the person involved. There likely are significant mental health challenges associated with this behavior.

      Future scholarly inquiry should focus on the associated social, psychological, and behavioral precursors and outcomes, as this can better inform prevention strategies for digital self-harm, as well as appropriate responses when such incidents occur. Researchers should seek to more deeply understand youth motivations for digital self-harm and help them to learn more constructive coping mechanisms and solutions for their emotional needs. In the interim, it is essential that parents, educators, and mental health professionals working with young people extend support to all targets of online abuse in informal and conversational, as well as formal and clinical ways when necessary.

      The full paper can be found here. (email us if you don’t have access)

      Suggested citation: Patchin, J. W., & Hinduja, S. (2024). Adolescent Digital Self-Harm Over Time: Prevalence and Perspectives. Journal of School Violence, 1–13.

      Featured image: Dev Asangbam (Unsplash)

      The post Digital Self-Harm: The Growing Problem You’ve Never Heard Of appeared first on Cyberbullying Research Center.

      Adolescent Digital Self-Harm Over Time: Prevalence and Perspectives


      Digital self-harm, the anonymous online posting, sending, or otherwise sharing of hurtful content about oneself, has not received the same amount of scholarly scrutiny as other forms of self-directed abuse. In the current paper, we analyze three independent national surveys of U.S. teens (aged 13–17, M = 14.96) in repeat cross-sectional studies conducted in 2016 (N = 4,742), 2019 (N = 4,250), and 2021 (N = 2,546) to assess the prevalence of two measures of digital self-harm. We examine demographic differences within each sample (gender, race, and sexual orientation), whether experience with cyberbullying was associated with these behaviors, and changes over time. Overall, the prevalence of digital self-harm has been increasing over time, and changes in demographic influences were observed. Implications for identifying, preventing, and responding to digital self-harm are discussed.

      Patchin, J. W. & Hinduja, S. (2024). Adolescent Digital Self-Harm Over Time: Prevalence and Perspectives, Journal of School Violence, DOI: 10.1080/15388220.2024.2349566

      Download PDF

      If you are unable to access the article at the link above, please email us and we will send you a copy.

      The post Adolescent Digital Self-Harm Over Time: Prevalence and Perspectives appeared first on Cyberbullying Research Center.

      The Nature and Extent of Youth Sextortion: Legal Implications and Directions for Future Research


      Sextortion, the threatened dissemination of explicit, intimate, or embarrassing images of a sexual nature without consent, is an understudied problem. Despite a recent increase in reported incidents among adolescents in the United States, little is known about the nature and extent of sextortion among this population. The current research explores sextortion behaviors among a national sample of 4972 middle and high school students (mean age = 14.5) for the purpose of illuminating how many youth are targeted, and understanding various characteristics of the incident (including who was involved, what offenders wanted, what offenders did, and who targets told). About 5% of youth reported that they were victims of sextortion, primarily by people they knew. Many of those targeted did not disclose the incident to adults. Implications for future research and the law are discussed.

      Patchin, J. W. & Hinduja, S. (2024). The Nature and Extent of Youth Sextortion: Legal Implications and Directions for Future Research, Behavioral Sciences and the Law, https://doi.org/10.1002/bsl.2667

      Download PDF

      If you are unable to access the article at the link above, please email us and we will send you a copy.

      The post The Nature and Extent of Youth Sextortion: Legal Implications and Directions for Future Research appeared first on Cyberbullying Research Center.

      Responding to Cyberbullying: Strategies for School Counselors


Technology has created many opportunities for students to be hurtful to each other in a variety of ways, and has made interpersonal peer conflict even more challenging for schools to deal with. This is complicated by the reality that youth have always been hesitant to confide in adults when faced with problems with peers. In addition, the ever-changing apps, platforms, and games involved may overwhelm even the most well-meaning of adults. But it is also important to remember that cyberbullying is less a technological issue than a relationship issue, and school counselors have a lot to offer to help, even if they don't know much about the latest app or online platform. Below I discuss important considerations and strategies for school counselors when responding to cyberbullying.

      Support and Protect Students

      The safety and well-being of students should always be the foremost priority. Ask yourself how you can help students feel supported, heard, and encouraged. It is essential to convey support because students who have been targeted are likely in a very vulnerable state. Demonstrate through words and actions that you both desire the same end result: stopping the cyberbullying and ensuring it doesn’t happen again. This can be accomplished by working together to arrive at a mutually agreed-upon course of action. It is important to not be dismissive of their experience, but instead to validate their voice and perspective. This can help in the healing and recovery process.

      Demonstrate through words and actions that you both desire the same end result: stopping the cyberbullying and ensuring it doesn’t happen again.

      Targets of cyberbullying must know with certainty that the adults in whom they confide will intervene rationally and logically, and not make the situation worse. This is their biggest fear. Why? Because that’s what often happens. In our 2023 study of over 5,000 middle and high school students from around the US, many conveyed this sentiment. When asked why they don’t report cyberbullying, students said: “Whenever anyone tells about this stuff nothing happens and the bullying just gets worse.” “I was afraid to tell because I thought it would get worse.” “No one does anything and everyone would know I told so it could make it worse.” Reassure your students that you are on their side and will partner with them to try to make things better.

      Gather Information

Collect as much information as you can about what happened and who was involved. In many cases the student being targeted will know (or at least will think they know) who is doing the cyberbullying, even if it is happening in an anonymous online environment or involves an unfamiliar screenname. Meet with the target in a private setting where they will not be seen, so that they are not viewed as a snitch. Assemble any evidence they might have, including screenshots, screen recordings, account names, or message comments. Encourage them to continue to gather documentation of any further harassment.

      Empower Students

Empower students to address the cyberbullying themselves by giving them tools to respond in the moment. Make sure they know how to report bullying and how to block users on the apps they are using. Bullying violates the Terms of Service of all reputable online platforms, and those who engage in such behaviors should be held accountable. Encourage students to document what is happening by saving text messages or screen-grabbing abusive content. This evidence will help adults better understand what happened so they can respond appropriately. It also helps skeptical parents understand the seriousness of the situation if they can see exactly what was said online. Remind those who are targeted not to retaliate, as tempting as it might be, because the other student(s) involved might similarly report them for bullying, and then they will get in trouble. I've seen it happen all too often: students who have endured mistreatment for weeks or months finally snap and do or say something inappropriate. They then become the ones who are disciplined, instead of the original instigator.

      Refer to Your School Bullying Policy

      Often online mistreatment is connected to something going on at school. If so, your school’s bullying policy should be consulted. Be sure to follow the procedures outlined. If you haven’t recently reviewed your school’s bullying policy, now might be a good time to take a look at it to ensure you understand your role and responsibilities. Touch base with your counselor colleagues at other area schools to see what their bullying prevention policies and programs look like and discuss ways to improve upon them. Your state School Counselor Association might also have helpful resources.

      Identify Contributing Factors

When made aware of bullying occurring among students at your school (whether online or off), ask yourself why it is happening and determine what must be done to stop it. This analysis should be applied at both the individual student level (why is this particular student being bullied) and at the school level (why is bullying happening here). With respect to specific students, work with them to identify any potential underlying causes. Why do they think they are being targeted? If it is something that can be changed (for example, social or communication skills), then work with them to develop those skills. If it is not (for example, being targeted because of their appearance), then teach them deflection skills and resilience. Maybe a student needs to avoid certain areas of the school or be switched into a different classroom (if possible). Maybe they need to stop visiting a particular chat channel or playing a multiplayer game where they are constantly exposed to someone intent on mistreating them.

      Remember: most of the time students who are cyberbullied just want the bullying to stop.

      Improve the Climate at your School

If you are starting to notice more bullying and cyberbullying at your school (regular surveys would help determine this more systematically), then you need to take action. Try to develop more shared school spirit. Hold pep rallies. Recognize academic and athletic accomplishments. Get to know your students well so that they do not feel alone or lost (especially in a large school). Encourage students to look out for one another by creating an anonymous reporting system. Establish a social norm of care and compassion. Bullying may be an issue at other schools, but commit to the goal of eliminating it at yours. Address the seemingly small forms of mistreatment (hurtful comments, exclusion) so that they do not escalate into something much worse.

      Develop Relationships

      As discussed above, students are reluctant to talk to adults about their experiences online, especially ones that are negative. The best thing a school counselor can do is create the kind of relationship with students where they feel comfortable coming forward. And remember, most of the time students who are cyberbullied just want the bullying to stop. Sometimes that might require formal discipline of the aggressor, but not always. Think creatively about what needs to happen in this particular situation, involving these particular students, to get the behavior to stop. If you are able to accomplish this, then students will run to you with their problems, for better and worse.

      Suggested citation: Patchin, J. W. (2024). Responding to Cyberbullying: Strategies for School Counselors. Cyberbullying Research Center. https://cyberbullying.org/responding-to-cyberbullying-strategies-for-school-counselors

      Featured image: Center for Aging Better (Unsplash)

      The post Responding to Cyberbullying: Strategies for School Counselors appeared first on Cyberbullying Research Center.