UPDATED. Ethics Midterm Monitor (Dear Mr. #MarkZuckerberg / Dear Mr. #EvanClarkWilliams / Dear Mr. #JackDorsey / Dear Mr. #SundarPichai – from #universityofthephilippines students) Oct. 16 at 11:59pm

    UPDATE (Correction): The deadline is Wednesday as usual (as practiced); Wednesday is Oct. 16, not Oct. 14. Apologies.

Ethics Midterm Monitor (Dear Mr. Mark Zuckerberg / Dear Mr. Evan Clark Williams/ Dear Mr. Jack Dorsey/ Dear Mr. Sundar Pichai – from University of the Philippines students) Oct. 14 at 11:59pm

      To cap the five-week discussion on social media behavior, standards, and phenomena, please give your informed analysis of the following in a letter addressed to the owner/CEO of (choose one) 1. FB, 2. Twitter, 3. IG, or 4. YouTube (Dear Mr. Mark Zuckerberg / Dear Mr. Evan Clark Williams / Dear Mr. Jack Dorsey / Dear Mr. Sundar Pichai), citing authorities/published articles by experts as your basis, in order to recommend reforms to the organizational structure, management, content creation, and enforcement of rules and standards:

        A. (In your letter, please address the following) Whether the business model, organizational structure, management, content creation, and enforcement of rules and standards of (choose one) 1. Twitter, 2. FB, 3. IG, or 4. YouTube are enough to safeguard the well-being of its users and audience. Why or why not?

     B. In your letter: Specify an extended and unchecked violation or “massive” violation, based on your research, or a good practice to illustrate your point, citing authorities/published articles by experts as your basis.

     C. In your letter: Describe the effect or effects of such an extended and unchecked violation or “massive” violation on a specific national event/activity, a specific social or personal behavior, or a specific real-world phenomenon or personal situation.

     D. In your letter: Recommend a specific reform to the organizational structure, management, content creation, and enforcement of rules and standards of the company.

      (NOTE: Only one specific reform per post, as class members are not allowed to copy a recommendation already provided by another class member; it pays to post promptly.)

      (20 pts. Class members need to provide a discussion of all of items A, B, C, and D (in the letter format described); otherwise, points will be deducted. Deadline: before midnight on Oct. 16, 2019, i.e., at 11:59pm.)

     Happy daring-to-pioneer-reforms-in-social-media, everyone!

31 comments

  1. Dear Mr. Sundar Pichai:

    I am writing to express my concern over the management of content creation on YouTube. I believe that, despite the policies already implemented, they are not enough to safeguard the well-being of its users and audiences, especially since YouTube is widely used by children. Despite YouTube’s best efforts at age restriction, children are still susceptible to videos containing disturbing content.

    According to experts, the filtering of content on YouTube is not enough, even though there is already a platform specifically for children called YouTube Kids. Leilani Carver, PhD, a professor of strategic communication at Maryville University, stated: “In the worst cases, small children may have accidentally streamed pornography, bestiality, extremely violent and terrifying content, drug use and/or just weird stuff. Most of these things were not initially posted on YouTube Kids but slipped through the filters” (Brandon, 2019). Some are even disguised as harmless videos seemingly about Peppa Pig or other children’s TV shows, which are easily accessible, especially because of YouTube’s recommendation algorithm (Timberg, 2019).

    Dr. Carver mentioned that YouTube filters its content through a mix of bots, human reviewers, and the flagging of content by users (Brandon, 2019). However, this does not seem to be as effective as it should be. In early 2019, a Florida mother claimed she found videos on YouTube and YouTube Kids with instructions on how to commit suicide (Criss, 2019). She also mentioned finding other disturbing videos on topics such as domestic violence, sexual exploitation, and human trafficking. Researchers say that this can cause problems for children lacking in maturity, since they may not have the discretion to turn away from this kind of content (Timberg, 2019), which may in turn influence their attitudes and behavior. The content in these videos may also cause them trauma.

    In order to address this, I would like to recommend that the filtering and monitoring of YouTube videos be done primarily by employees of the company, instead of relying on a mix of users and bots, to ensure the safety of children.

    Thank you for your consideration.

    Brandon, J. (2019, September 9). Is Google doing enough to protect kids from disturbing YouTube videos? Retrieved from https://www.foxnews.com/tech/google-kids-disturbing-youtube-videos

    Criss, D. (2019, February 25). A mom found videos on YouTube Kids that gave children instructions for suicide. Retrieved from https://amp.cnn.com/cnn/2019/02/25/tech/youtube-suicide-videos-trnd/index.html

    Timberg, C. (2019, March 15). Young children can easily see disturbing content on YouTube despite age restrictions. Retrieved from https://www.independent.co.uk/life-style/gadgets-and-tech/youtube-kids-children-videos-age-restriction-peppa-pig-a8824261.html

  2. Dear Mr. Sundar Pichai:

    Good day.

    I would like to raise some concerns with regard to hate speech content on YouTube. As it stands, the current policies addressing this remain widely problematic and underutilized.

    Controversies surrounding hate speech content on the video-sharing website, as well as the consequent interventions, should not be new to you. YouTube has been subject to such criticisms in the past, with critics arguing that the website had actively promoted conspiracy theories, falsehoods, incendiary statements, extremist content, and misleading videos, among others. I would like to outline specific cases. Multiple videos from YouTube content creator Steven Crowder targeted a gay reporter of a prestigious publication and attacked him on the bases of his race and sexual orientation (Romano, 2019). Bellingcat, an investigative news site, found that YouTube was the single most frequently discussed website among current fascist activists (Evans, 2018). Even those who aren’t as fueled by hate can be radicalized by the platform, with former extremists saying that they had been sucked in by propaganda through YouTube’s problematic algorithm (Weill, 2018; Roose, 2019).

    What is most troubling, to me, is that the effects of hate speech promulgated through YouTube are already being felt today. Those who challenge the criticisms of YouTube, even those inside the company, are forced to remain silent for fear of being “doxxed” (having private or identifying information published on the internet with malicious intent) by right-leaning colleagues and websites (Farokhmanesh, 2019; Tiku, 2018). On a global level, YouTube has been credited with the success of the far-right movement in Brazil, whose president maintained a channel where he attacked the gay community, among others (Fisher & Taub, 2019). Your management has even sought to justify hate speech if it is part of a larger debate (Romano, 2019). Perhaps the worst consequence of this is the rise, and the ongoing process, of radicalization through YouTube, a phenomenon that has been detailed by journalists and scholars alike (Ingram, 2018; Tufekci, 2018). I am aware of the changes that have arisen in the platform since, such as your recent purging of over 100,000 videos with harmful content and the enactment of significant policy changes (Google, 2019). However, that it has taken this long, and only after receiving widespread flak and media attention, to remove such content should be put under scrutiny.

    With this, I am recommending that you play a far firmer hand. Perform a more rigorous verification process for people who wish to upload videos of a political nature. Ensure that those who have been suspended, or permanently removed, from YouTube in the past can no longer engage with the platform. Take the necessary action to identify hate-filled messages that go beyond the surface level, messages that cannot be identified by bots or algorithms but only by people themselves. Make the process more active by enabling people (common users, such as myself) to report these problems and to receive a response from a member of your team accordingly. This is a challenge that demands further action not solely from YouTube’s executives and the producers of such video content, but also from the people who use your platform and who are actively affected by the promulgation of hate on a regular basis.

    Thank you for your consideration.

    References:
    Evans, R. (2018, October 11). From memes to Infowars: How 75 fascist activists were ‘red-pilled’. Retrieved from https://www.bellingcat.com/news/americas/2018/10/11/memes-infowars-75-fascist-activists-red-pilled/

    Farokhmanesh, M. (2019, Jun 7). Google’s LGBTQ employees are furious about YouTube’s policy disasters. Retrieved from https://www.theverge.com/2019/6/7/18656540/googles-youtube-lgbtq-employees-harassment-policies-pride-month

    Fisher, M. & Taub, A. (2019, August 11). How YouTube radicalized Brazil. Retrieved from https://www.nytimes.com/2019/08/11/world/americas/youtube-brazil.html

    Google. (2019, September 3). The four R’s of responsibility, part 1: Removing harmful content. Retrieved from https://youtube.googleblog.com/2019/09/the-four-rs-of-responsibility-remove.html

    Ingram, M. (2018, Sep 19). Youtube’s secret life as an engine for right-wing radicalization. Retrieved from https://www.cjr.org/the_media_today/youtube-conspiracy-radicalization.php

    Romano, A. (2019, Jun 5). YouTube may allow hate speech if it’s part of a larger argument. Retrieved from https://www.vox.com/identities/2019/6/5/18653900/youtube-lgbtq-hate-speech-policy-carlos-maza-steven-crowder
    Roose, K. (2019, Jun 8). The making of a YouTube radical. Retrieved from https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html
    Tiku, N. (2018, Jan 26). The dirty war over diversity inside Google. Retrieved from https://www.wired.com/story/the-dirty-war-over-diversity-inside-google/
    Tufekci, Z. (2018, Mar 10). Youtube, the great radicalizer. Retrieved from https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html
    Weill, K. (2018, Dec 19). How YouTube built a radicalization machine for the far-right. Retrieved from https://www.thedailybeast.com/how-youtube-pulled-these-men-down-a-vortex-of-far-right-hate

  3. Dear Jack Dorsey,

    This letter is meant to call your attention regarding the implementation of Twitter’s community guidelines.

    The unprecedented influence of Twitter as a social media platform has helped connect users all around the world; it has also provided them with a voice online. Whether or not they use this “voice” responsibly, it is your mandate as CEO to ensure that Twitter’s rules and policies are properly enforced. In this digital age, one’s words online can greatly affect people at a personal level; they can also reach members of a community at a national level. It boils down to the argument that a simple tweet can have sociopolitical impacts on our world. Twitter’s rules and policies are meant to ensure that tweets make positive and, hopefully, inspiring contributions to our respective communities. Achieving this vision requires taxing surveillance of tweets. However, some users still manage to get away with violating Twitter’s guidelines.

    A solid example of this is a seemingly harmless tweet posted by Priyanka Chopra on February 27, 2019. She said “Jai Hind #IndianArmedForces” and quickly received backlash online. In English, Jai Hind translates to “Victory to India.” To contextualize this issue, India and Pakistan have a long, shared history of conflict. Thousands of lives have been lost over the years as a result of military interference. Chopra’s tweet went out soon after the Balakot strike, which occurred in the early hours of the morning on February 26, 2019. Her hashtag seemed to support her country’s military airstrike. The Indian government claimed that there were no casualties. However, some Western sources argued that this statement was false and that the number of people injured or killed exceeded 50.

    Chopra is in violation of Twitter’s glorification of violence policy, wherein it is forbidden to glorify, praise, condone, or celebrate violent acts committed by civilians that resulted in death or serious physical injury. She is an esteemed public figure: actress, philanthropist, and human rights activist. Take note that her Twitter account is still verified. Her advocacy for peace earned her recognition as a UN ambassador in 2010. Therefore, people hold her to a higher standard in this situation. As a result, critics called to have Chopra removed as UN ambassador. Pakistan even filed an official complaint with the United Nations, accusing her of fueling tensions between the two nuclear-armed countries. Younger users even started calling her a hypocrite for praising military action on such a public platform. Despite the amount of flak, Chopra did not delete her tweet, and Twitter failed to flag the post.

    It is true that one cannot control the response to posts on social media. People are more informed, aware, and sensitive to social issues; therefore, Twitter’s policies should also adjust to these changing times. I propose that verified accounts lose their verification if they violate any of these community guidelines. Those monitoring the glorification of violence policy on Twitter should also recognize that Chopra’s tweet should be taken down as soon as possible.

    I hope that this letter will prompt you to reconsider Chopra’s verification status, and to better enforce the rules and policies of Twitter.

    Thank you.

    Sources:
    Abi-Habib, M., & Ramzy, A. (2019, February 26). Indian jets strike in Pakistan in revenge for Kashmir attack. Retrieved October 14, 2019, from https://www.nytimes.com/2019/02/25/world/asia/india-pakistan-kashmir-jets.html

    Al Jazeera. (2019, August 21). Pakistan asks UN to remove Priyanka Chopra as goodwill ambassador. Retrieved October 14, 2019, from https://www.aljazeera.com/news/2019/08/pakistan-asks-remove-priyanka-chopra-goodwill-ambassador-190821081840304.html.

    Glorification of violence policy. (2019, March). Retrieved October 14, 2019, from https://help.twitter.com/en/rules-and-policies/glorification-of-violence.

    Daily Mail Online. (2019, March 1). Priyanka Chopra accused of ‘glorifying war’ in tweet supporting India in stand-off with Pakistan. Retrieved October 14, 2019, from https://www.dailymail.co.uk/news/article-6760995/Priyanka-Chopra-accused-glorifying-war-tweet-supporting-India-stand-Pakistan.html

  4. Dear Mr. Mark Zuckerberg and Mr. Adam Mosseri:

    Good day!

    In the past few years (from 2010 to 2019), the younger generation has been switching from Facebook to Instagram. Instagram is currently becoming one of the most popular social media apps around the world. Studies have stated that Instagram is effective for interaction not only for personal reasons but also for business purposes. However, I would like to express my concern over the following: 1. terms and conditions regarding business purposes and copyright infringement, and 2. self-harm/self-esteem.

    For my first concern, I believe that even though the terms and conditions are already implemented, they are confusing and mostly written in complicated language (e.g., the license granted is described as “a non-exclusive, royalty-free, transferable, sub-licensable, worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of” the user’s content). Moreover, on most social media platforms, users must read and agree to the policies before they can have an account. With Instagram, according to Hayleigh Bosher, a lecturer in intellectual property law, the minimum age to sign up is 13, but the reading age required for the terms is closer to university level. Second, your app claims that it does not take ownership of its users’ content, but the terms state that the user grants Instagram a “non-exclusive, fully paid and royalty-free, transferable, sub-licensable, worldwide license” to use their content. Third, regarding copyright infringement and intellectual property, it is stated in your policies that Instagram can license a user’s image or video to any third party for free (for any promotion or purpose) without seeking permission, giving any notice, or offering any payment to the original user. I believe that this includes the ability to edit, share, pass on the rights to, and copy the content. There is also confusion, or the possibility of error, when posting an image with regard to copyright infringement. Since under the terms users agree that they either own all the content they post or have sought permission to use it, a person can be sued over this issue even when the content depicts them (e.g., Khloe Kardashian faced legal action, which was later dropped, after she posted a photograph of herself on her own Instagram that was owned by a photographic agency). Lastly, since there are no specific statements about geographical restrictions in the agreement, the images, videos, and information of the owner can come to harm through any of the practices mentioned earlier, which is why the rules and standards of Instagram are not enough to safeguard the well-being of its users and audience.

    For my last concern, regarding self-harm and self-esteem: since Instagram is a search engine for billions of images, taking down photos depicting self-harm is not enough. There are also articles and growing concerns about the effect of social media on self-esteem. According to Clarissa Silva, a scientist, strategist, relationship expert, and entrepreneur, “60% of people using social media reported that it has impacted their self-esteem in a negative way, 50% reported social media having negative effects on their relationships, 80% reported that is easier to be deceived by others through their sharing on social media”. There have been reports by the BBC about images on Instagram depicting self-harm that have been linked to suicides and self-harm among viewers. Moreover, since there are a lot of celebrity and “body goals” images in the app, young people are prone to developing eating disorders, experiencing low self-esteem, and using bodies, followers, and likes as a weapon of power (Underwood, 2019).

    To address these concerns: for my first concern, I would like to recommend that Instagram improve and clarify its copyright policies regarding copyright claims in its agreement. Instagram should also introduce a copyright education tool to raise users’ awareness of the applicable laws, and provide a tool to inform users when someone is stealing or copying their content. I believe that users (especially companies and artists) would value this because they sell images and post work made with their own effort. Next, regarding my second concern: since in one of your posts Instagram stated, “We want your friends to focus on the photos and videos you share, not how many likes they get…”, I believe that removing the visible like count will improve the app, because it will reduce insecurities and the comparing of accounts (users won’t think less of themselves if a photo doesn’t get as many likes as their friends’ or followers’ posts). Regarding business purposes, Instagram should experiment with and introduce other ways to make the app work better as a marketing tool. Lastly, to reduce self-harm, I note that Instagram has been implementing a ban on all graphic self-harm images on the platform. I recommend that Instagram also closely monitor hashtags, since they are a discovery engine commonly used today. Doing this will help ensure the safety of users and make the application a less toxic medium.

    Thank you for your consideration.

    References:

    Fingas, J. (2018, October 1). Instagram’s new CEO is Facebook veteran Adam Mosseri. Retrieved from https://www.engadget.com/2018/10/01/instagram-head-adam-mosseri/

    Huang, Y. & Su, S. (2018, August 9). Motives for Instagram Use and Topics of Interest among Young Adults. Retrieved from https://www.researchgate.net/publication/326948568_Motives_for_Instagram_Use_and_Topics_of_Interest_among_Young_Adults

    Kambhampati, S., Manikonda, L., & Hu, Y. (2014, January). What We Instagram: A First Analysis of Instagram Photo Content and User Types. Retrieved from https://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/download/8118/8087

    Instagram. (2019, April 19). Terms of Use. Retrieved from https://help.instagram.com/581066165581870

    Huey, L. & Yazdanifard R. (2014, September). How Instagram can be used as a tool in social networking marketing. Retrieved from https://www.academia.edu/8365558/How_Instagram_can_be_used_as_a_tool_in_social_network_marketing

    BBC News. (2019, January 23). Instagram boss responds to suicide claims. Retrieved from https://www.bbc.com/news/av/technology-46979461/instagram-boss-responds-to-suicide-claims

    BBC News. (2019, February 5). Self-harm content ‘grooms people to take own lives’. Retrieved from https://www.bbc.com/news/uk-47127208

    Lavis, A. & Winter R. (2019, February 8). Self-harm and social media: a knee-jerk ban on content could actually harm young people. Retrieved from https://theconversation.com/self-harm-and-social-media-a-knee-jerk-ban-on-content-could-actually-harm-young-people-111381

    Underwood, M. (2019, July 16). Your body as a weapon: the rise of the ‘revenge body’ online. Retrieved from https://theconversation.com/your-body-as-a-weapon-the-rise-of-the-revenge-body-online-118332

    Instagram. (2019, July 18). We want your friends to focus on the photos and videos you share, not how many likes they get. You can still see your own likes by tapping on the list of people who’ve liked it, but your friends will not be able to see how many likes your post has received.” [Twitter Post]. Retrieved from https://twitter.com/instagram/status/1151605660150194176

    Orlando, J. (2019, July 19). What’s not to like? Instagram’s trial to hide the number of ‘likes’ could save users’ self-esteem. Retrieved from https://theconversation.com/whats-not-to-like-instagrams-trial-to-hide-the-number-of-likes-could-save-users-self-esteem-120596

    Silva, C. (2017, February 22). Social Media’s Impact On Self-Esteem. Retrieved from https://www.huffpost.com/entry/social-medias-impact-on-self-esteem_b_58ade038e4b0d818c4f0a4e4

    John A. & Wood S. (2019, August 12). Social media isn’t causing more eating disorders in young people – new study. Retrieved from https://theconversation.com/social-media-isnt-causing-more-eating-disorders-in-young-people-new-study-119959

    Bosher H. (2018, September 12). Ten things you should know about Instagram’s terms of use. Retrieved from https://theconversation.com/ten-things-you-should-know-about-instagrams-terms-of-use-102800

    Mclaughlin K. (2018, February 22). EXCLUSIVE: Photo agency that sued Khloe Kardashian for more than $175,000 after she shared THIS photo on Instagram agrees to dismiss its case. Retrieved from https://www.dailymail.co.uk/news/article-5423741/Photo-agency-agrees-drop-suit-against-Khloe-Kardashian.html

  5. Dear Mr. Mark Zuckerberg:

        Greetings.
    
    I am writing to inform you that the steps Facebook is currently taking to enforce its policies and community standards to safeguard its users are not sufficient. Several abusive and offensive posts have gone undetected, some of which are still up to this day. Users are violating many of the policies which are supposed to keep the Facebook community safe, and even the Facebook team themselves are violating their users’ right to privacy. Among the violations that still circulate are fake news, hate speech, bullying, and privacy breaches.
    
    Fake news on the Internet has been more widely shared on your platform than on other social media platforms (Silverman, 2016). Sadly, many users who see fake news articles claim to believe them (Silverman & Singer-Vine, 2016). Putting these facts together, fake news plays a powerful and influential role in the digital realm, affecting even national events. It played a big part during the 2016 US presidential campaign. Facebook was caught in the center of the issue, accused of swinging some voters in favor of then-candidate Donald Trump by allowing misleading and outright wrong stories to spread on the platform (Wingfield et al., 2016). These included a report from wtoe5news.com with the headline “FBI agent suspected in Hillary email leaks found dead in apparent murder-suicide” and a story that Pope Francis had personally endorsed Trump’s campaign (Allcott & Gentzkow, 2017). Experts and analysts have suggested that Donald Trump would not have been elected president were it not for the influence of fake news (Parkinson, 2016; Read, 2016; Dewey, 2016).
    
    Facebook also has a problem detecting and censoring or banning hate speech and bullying. This often happens against minority groups, since Facebook is a public platform on which anyone can post. Several users have expressed hate speech against PWDs in their posts and comments. In January 2019, the #BoyetChallenge was launched by many users, mimicking Boyet, one of the main characters in GMA’s show “My Special Tatay,” who has autism. Following that, in March, a local coffee kiosk which employs PWDs and out-of-school youth posted a screenshot of messages from a customer complaining about their deaf and blind staff. The following month, some users left offensive comments on the profile picture of a PWD, which was his graduation picture. They made fun of his disability and made very uncalled-for attacks. Even prominent figures with widely followed Facebook accounts are not properly monitored by your team. The present Deputy Executive Director of the Overseas Workers Welfare Administration (OWWA) and former Palace Communications Assistant Secretary Mocha Uson and her friend Drew Olivar posted a video online of the two of them mocking sign language (Madarang, 2019). Aside from violating your community standards, acts like these are unlawful according to Republic Act 7277, or the Magna Carta for Disabled Persons. Based on the amended version, Republic Act 9442, public ridicule includes “making mockery of a person with disability whether in oral or in writing” (ncda.gov).
    
    Your team has also committed a violation against us, your users, by blindsiding us in our patronage of your site. We therefore now feel unsafe on the very platform where you wanted people to build genuine connections. A whistleblower revealed that the private information of more than 50 million Facebook users had been given to the political consultancy Cambridge Analytica. This blatantly goes against the 2012 consent decree you had signed, stemming from a previous FTC investigation into privacy concerns, to better protect user privacy (Paul, 2019). Because of this, several users have already lost their trust in Facebook, deactivating their accounts or not posting as much as they used to. We now have a Facebook community wary not only of malicious users, but also of the very people who created this “connected community”.
    
    If you are not able to enforce stricter policies, there will be a feeling of mistrust and skepticism among your users. The presence of fake news producers on a platform has several possible social costs. First, consumers who mistake a fake outlet for a legitimate one hold less accurate beliefs and are misinformed. Second, users may become more skeptical of producers of legitimate news, as legitimate stories become hard to distinguish from fake ones. Third, it reduces demand for high-precision, low-bias reporting, in turn reducing the incentive to invest in accurate reporting and to truthfully report signals. These negative effects are not worth the entertainment value gained by those who do like reading fake news (Allcott & Gentzkow, 2017). Additionally, on a national level, fake news manipulates the unknowing public, undermining the ability of the democratic process to select quality candidates. The victory of presidential candidate Donald Trump is a manifestation of this.
    
    As for hate speech and bullying, these acts target your disadvantaged users and push them further into the peripheries. They make them feel like even social media is unsafe for them, and affect their perception of themselves. They severely cut down their self-esteem and make them ashamed to face the public. In many cases, PWDs choose not to pursue an education, or are forced to drop out of school, because of the bullying they experience in school and online. I would like you to hear my cousin’s story. She was diagnosed with bipolar disorder and was already about to graduate from high school. However, due to mean Facebook posts and comments directed at people with mental disabilities, and at her specifically, she was forced to drop out of school so she would not come into contact with her bullies again. She was supposed to graduate that year.
    
    In order to improve Facebook, I would suggest enforcing stricter policies and covering a larger area of monitoring. Given Facebook’s repeated privacy breaches, I also recommend the fundamental structural reforms suggested by Senator Mark R. Warner of Virginia. Facebook should have a checks-and-balances system even within its executive board. If possible, each country should have a government official monitoring the Facebook higher-ups, to ensure that the site is not used for the team’s personal gain. A representative from the Senate Committee on Public Information and Mass Media, or any counterpart to it, would help with transparency. As for monitoring hate speech and fake news, Facebook should update its language base. It is usually comments and posts that are not in English that manage to slip under the radar. Facebook can already do translations, but they are still not accurate. Further development of Facebook’s multi-language capability could greatly help it monitor posts better.
    

    Thank you for your time and kind consideration.

    In good faith,
    Giland Lim

    REFERENCES:

    Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. doi: 10.3386/w23089

    Dewey, C. (2016, November 17). “Facebook fake-news writer: ‘I think Donald Trump is in the White House because of me.’” Washington Post. Retrieved from https://www.washingtonpost.com/news/the-intersect/wp/2016/11/17/facebook-fake-news-writer-i-think-donald-trump-is-in-the-white-house-because-of-me/

    Graham-Harrison, E., & Cadwalladr, C. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. Retrieved October 14, 2019, from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.

    Madarang, C. R. S. (2019, April 2). Differently abled student achieves dream of graduating despite being mocked. Retrieved October 14, 2019, from http://www.interaksyon.com/trends-spotlights/2019/04/02/146679/concerned-filipinos-hit-back-at-people-mocking-differently-abled-students-graduation-photo/.

    Parkinson, H. (2016). “Click and Elect: How Fake News Helped Donald Trump Win a Real Election.” Guardian, November 14

    RA 9442 – An Act Amending Republic Act No. 7277, Otherwise Known As The “Magna Carta For Disabled Persons, And For Other Purposes”. (n.d.). Retrieved October 14, 2019, from https://www.ncda.gov.ph/disability-laws/republic-acts/republic-act-9442/.

    Read, M. (2016). “Donald Trump Won because of Facebook.” New York Magazine, November 9.

    Wingfield, N., Isaac, M., & Benner, K. (2016, November 14). Google and Facebook Take Aim at Fake News Sites. The New York TImes. Retrieved from https://www.mediapicking.com/medias/files_medias/nytimes—google-and-facebook-take-aim-at-fake-news-sites-0237488001479491012.pdf

    Wong, J. C. (2019, July 12). Facebook to be fined $5bn for Cambridge Analytica privacy violations – reports. Retrieved October 14, 2019, from https://www.theguardian.com/technology/2019/jul/12/facebook-fine-ftc-privacy-violations.

  6. Dear Mr. Mark Zuckerberg:

    Greetings.
    
    I am writing to inform you that the steps Facebook is currently taking to enforce its policies and community standards to safeguard its users are not sufficient. Many abusive and offensive posts have gone undetected, some of which remain up to this day. Users are violating many of the policies that are supposed to keep the Facebook community safe, and even the Facebook team itself has violated its users’ right to privacy. Among the violations that still circulate are fake news, hate speech, bullying, and privacy breaches.
    
    Fake news on the Internet has been more widely shared on your platform than on any other social media platform (Silverman, 2016). Sadly, many users who see fake news articles report that they believe them (Silverman & Singer-Vine, 2016). Taken together, these facts show that fake news plays a powerful and influential role in the digital realm, affecting even national events. It played a big part in the 2016 US presidential campaign. Facebook was caught at the center of the issue, accused of swinging some voters in favor of then-candidate Donald Trump by allowing misleading and outright false stories to spread on the platform (Wingfield et al., 2016). These included a report from wtoe5news.com with the headline “FBI agent suspected in Hillary email leaks found dead in apparent murder-suicide” and a story claiming that Pope Francis had personally endorsed Trump’s campaign (Allcott & Gentzkow, 2017). Experts and analysts have suggested that Donald Trump would not have been elected president were it not for the influence of fake news (Parkinson, 2016; Read, 2016; Dewey, 2016).
    
    Facebook also has a problem detecting and taking down hate speech and bullying. These are often directed at minority groups, since Facebook is a public platform where anyone can post. Several users have expressed hate speech against PWDs in their posts and comments. In January 2019, many users joined the #BoyetChallenge, mimicking Boyet, a main character with autism in GMA’s show “My Special Tatay.” In March, a local coffee kiosk that employs PWDs and out-of-school youth posted a screenshot of messages from a customer complaining about its deaf and blind staff. The following month, some users left offensive comments on the graduation profile picture of a PWD, making fun of his disability with uncalled-for attacks. Even prominent figures with widely followed Facebook accounts are not properly monitored by your team. Mocha Uson, present Deputy Executive Director of the Overseas Workers Welfare Administration (OWWA) and former Palace Communications Assistant Secretary, and her friend Drew Olivar posted a video in which both of them mocked sign language (Madarang, 2019). Aside from violating your community standards, acts like these are unlawful under Republic Act 7277, the Magna Carta for Disabled Persons. Under its amended version, Republic Act 9442, public ridicule includes “making mockery of a person with disability whether in oral or in writing” (ncda.gov).
    
    Your team has also committed a violation against us, your users, by blindsiding us in our patronage of your site. We now feel unsafe in the very platform where you wanted people to build genuine connections. A whistleblower revealed that the private information of more than 50 million Facebook users had been handed to the political consultancy Cambridge Analytica (Graham-Harrison & Cadwalladr, 2018). This blatantly goes against the 2012 consent decree you signed, stemming from an earlier FTC investigation into privacy concerns, to better protect user privacy (Wong, 2019). Because of this, several users have already lost their trust in Facebook, deactivating their accounts or posting less than they used to. We now have a Facebook community wary not only of malicious users, but also of the very people who created this “connected community”.
    
    If you are not able to enforce stricter policies, mistrust and skepticism will grow among your users. The presence of fake news producers on a platform carries several social costs. First, consumers who mistake a fake outlet for a legitimate one hold less accurate beliefs and are misinformed. Second, users may become more skeptical of legitimate news producers, who become harder to distinguish from fake ones. Third, it reduces demand for high-precision, low-bias reporting, in turn reducing the incentives to invest in accurate reporting and to truthfully report signals. These negative effects outweigh the entertainment value gained by those who enjoy reading fake news (Allcott & Gentzkow, 2017). Additionally, on a national level, fake news manipulates the unknowing public, undermining the ability of the democratic process to select quality candidates. The victory of presidential candidate Donald Trump is a manifestation of this.
    
    As for hate speech and bullying, these acts target your disadvantaged users and push them further into the peripheries. They make victims feel that even social media is unsafe for them, and they distort victims’ perception of themselves, severely cutting down their self-esteem and making them ashamed to face the public. In many cases, PWDs choose not to pursue an education, or are forced to drop out of school, because of the bullying they experience in school and online. I would like you to hear my cousin’s story. She was diagnosed with bipolar disorder and was about to graduate from high school. However, because of mean Facebook posts and comments directed at people with mental disabilities, and at her specifically, she was forced to drop out of school so she would not come into contact with her bullies again.
    
    In order to improve Facebook, I suggest enforcing stricter policies and covering a larger area of monitoring. I also recommend the fundamental structural reforms suggested by Senator Mark R. Warner of Virginia, given Facebook’s repeated privacy breaches. Facebook should have a checks-and-balances system even within its executive board. If possible, each country should have a government official monitoring Facebook’s higher-ups, to ensure that the site is not used for the team’s personal gain. A representative from the Senate Committee on Public Information and Mass Media, or its counterpart elsewhere, would help with transparency. As for monitoring hate speech and fake news, Facebook should expand its language base. It is usually comments and posts that are not in English that manage to slip under the radar. Facebook can already do translations, but they are still not accurate. Further development of Facebook’s multi-language capability could greatly help in monitoring posts.
    

    Thank you for your time and kind consideration.

    In good faith,
    Giland Lim

    REFERENCES:

    Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), 211–236. doi: 10.3386/w23089

    Dewey, C. (2016, November 17). “Facebook Fake-News Writer: ‘I Think Donald Trump is in the White House because of Me.’” Washington Post. https://www.washingtonpost.com/news/the-intersect/wp/2016/11/17/facebook-fake-news-writer-i-think-donald-trump-is-in-the-white-house-because-of-me/.

    Graham-Harrison, E., & Cadwalladr, C. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. Retrieved October 14, 2019, from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.

    Madarang, C. R. S. (2019, April 2). Differently abled student achieves dream of graduating despite being mocked. Retrieved October 14, 2019, from http://www.interaksyon.com/trends-spotlights/2019/04/02/146679/concerned-filipinos-hit-back-at-people-mocking-differently-abled-students-graduation-photo/.

    Parkinson, H. (2016). “Click and Elect: How Fake News Helped Donald Trump Win a Real Election.” Guardian, November 14

    RA 9442 – An Act Amending Republic Act No. 7277, Otherwise Known As The “Magna Carta For Disabled Persons, And For Other Purposes”. (n.d.). Retrieved October 14, 2019, from https://www.ncda.gov.ph/disability-laws/republic-acts/republic-act-9442/.

    Read, M. (2016). “Donald Trump Won because of Facebook.” New York Magazine, November 9.

    Wingfield, N., Isaac, M., & Benner, K. (2016, November 14). Google and Facebook Take Aim at Fake News Sites. The New York Times. Retrieved from https://www.mediapicking.com/medias/files_medias/nytimes—google-and-facebook-take-aim-at-fake-news-sites-0237488001479491012.pdf

    Wong, J. C. (2019, July 12). Facebook to be fined $5bn for Cambridge Analytica privacy violations – reports. Retrieved October 14, 2019, from https://www.theguardian.com/technology/2019/jul/12/facebook-fine-ftc-privacy-violations.

  7. Thank you for your comment, ma’am; here is my revised submission for the midterms.

    Dear Jack Dorsey,

    This letter is meant to call your attention to the enforcement of Twitter’s community guidelines.

    The unprecedented influence of Twitter as a social media platform has helped connect users all around the world and has given them a voice online. Whether or not they use this “voice” responsibly, it is your mandate as CEO to ensure that Twitter’s rules and policies are properly enforced. In this digital age, one’s words online can greatly affect people at a personal level, and can also reach members of a community at a national level. It boils down to the argument that a simple tweet can have sociopolitical impact on our world. Twitter’s rules and policies are meant to ensure that tweets make positive and, hopefully, inspiring contributions to our respective communities. Achieving this vision requires taxing surveillance of tweets. Even so, some users still manage to get away with violating Twitter’s guidelines.

    A solid example of this is a seemingly harmless tweet posted by Priyanka Chopra on February 27, 2019. She said, “Jai Hind #IndianArmedForces” and quickly received backlash online. In English, Jai Hind translates to “Victory to India.” To contextualize this issue: India and Pakistan have a long shared history of conflict, and thousands of lives have been lost over the years as a result of military interference. Chopra’s tweet went out soon after the Balakot strike in the early hours of February 26, 2019, and her hashtag seemed to support her country’s airstrike. The Indian government claimed that there were no casualties; however, some Western sources argued that this statement was false and that more than 50 people were injured or killed.

    Chopra is in violation of Twitter’s glorification of violence policy, which forbids glorifying, praising, condoning, or celebrating violent acts committed by civilians that resulted in death or serious physical injury. She is an esteemed public figure: actress, philanthropist, and human rights activist. Take note that her Twitter account is still verified. Her advocacy for peace earned her recognition as a UN ambassador in 2010, so people hold her to a higher standard in this situation. As an effect, critics called for Chopra to be removed as UN ambassador, and Pakistan even filed an official complaint with the United Nations, accusing her of fuelling tensions between the two nuclear-armed countries. Younger users started calling her a hypocrite for praising military action on such a public platform. Despite the amount of flak, Chopra did not delete her tweet, and Twitter failed to flag the post.

    It is true that one cannot control the response to posts on social media. People are more informed, aware, and sensitive to social issues; therefore, Twitter’s policies should adjust to these changing times. I propose that there be a separate management team in charge specifically of verified accounts. Given your position as CEO, I understand that micromanaging everything would be nearly impossible. The organization would require a point person or director, if feasible, who can oversee the activities of verified accounts. As I have mentioned, online influencers and public figures should be held to a higher standard because they have a large following: everything they post gains enormous numbers of likes, retweets, and replies. I also suggest that verified accounts lose their verification if they violate any of the community guidelines. Finally, the committees assigned to screen and monitor tweets should take down Chopra’s tweet as soon as possible, as it is a direct violation of Twitter’s glorification of violence policy.

    I hope that my suggestions in this letter regarding the organization’s management will sway your decision to reconsider Chopra’s verification status, and to better enforce the rules and policies of Twitter.

    Thank you.

    Sources:
    Abi-habib, M., & Ramzy, A. (2019, February 26). Indian Jets Strike in Pakistan in Revenge for Kashmir Attack. Retrieved October 14, 2019, from https://www.nytimes.com/2019/02/25/world/asia/india-pakistan-kashmir-jets.html.

    Al Jazeera. (2019, August 21). Pakistan asks UN to remove Priyanka Chopra as goodwill ambassador. Retrieved October 14, 2019, from https://www.aljazeera.com/news/2019/08/pakistan-asks-remove-priyanka-chopra-goodwill-ambassador-190821081840304.html.

    Glorification of violence policy. (2019, March). Retrieved October 14, 2019, from https://help.twitter.com/en/rules-and-policies/glorification-of-violence.

    MailOnline. (2019, March 1). Priyanka Chopra accused of ‘glorifying war’ in tweet supporting India in stand-off with Pakistan. Retrieved October 14, 2019, from https://www.dailymail.co.uk/news/article-6760995/Priyanka-Chopra-accused-glorifying-war-tweet-supporting-India-stand-Pakistan.html.

  8. Dear Mr. Mark Zuckerberg:

    Good day.

    I am writing this letter to raise some points about Facebook’s current safety guidelines on moderating Facebook community groups.

    I recognize your past efforts in changing Facebook’s community standards for managing and monitoring Facebook groups, but as of now we have yet to see the effectiveness of such revisions, given the presence of numerous harmful Facebook groups. Just recently, journalists discovered a Facebook group exclusively for current and former border patrol agents (Thompson, 2019). In the said group, border patrol agents were found making racist slurs, specifically against Latinos. One post that concerned me the most was a discussion of a 16-year-old Guatemalan migrant who died in the custody of Texas’ border patrol, under which members of the group left careless comments, including a GIF captioned “oh well” and “if he dies, he dies.” From these alone, it can be seen how Facebook groups can be used as a platform to spread hate and evoke racism. Whether groups are public or not, it is bothersome, especially for Latinos’ safety, to know that such groups exist, as we do not know where these online activities may lead in the real world.

    According to Alison (2019), in monitoring the activities of Facebook groups, you rely heavily on AI and machine learning to detect bad activities and offensive content. Although artificial intelligence is indeed effective for managing computer systems at scale, I recommend that you also consider hiring more human reviewers to detect offensive and/or harmful groups, as there are times when bots fail to recognize certain online slang or language.

    I also acknowledge that you have simplified Facebook’s group settings from public, private, or secret to just private and public (Lee, 2019). However, I think these reforms should not focus only on group settings but also on defining consequences for offensive users who are active members of harmful Facebook groups. I recommend that you take stricter and quicker action in taking down not only the group itself but also the accounts of its members.

    I hope that this letter may help you further develop Facebook’s safety guidelines for its users. Thank you for your consideration.

    References:

    Alison, T. (2019, August 14). How do we help keep private groups safe? Retrieved from https://newsroom.fb.com/news/2019/08/private-groups-safety/

    Lee, D. (2019, August 14). Facebook is simplifying group privacy settings and adding admin tools for safety. Retrieved from https://www.theverge.com/2019/8/14/20805928/facebook-closed-secret-public-private-group-settings

    Thompson, A. (2019, July 1). Inside the Secret Border Patrol Facebook Group Where Agents Joke About Migrant Deaths and Post Sexist Memes. Retrieved from https://www.propublica.org/article/secret-border-patrol-facebook-group-agents-joke-about-migrant-deaths-post-sexist-memes

  9. Dear Mr. Jack Dorsey,

    While Twitter has a policy in place for content dealing with terrorism and violent extremism, I believe that the current enforcement of this policy is lacking, specifically with regards to alt-right and white supremacist content.

    Twitter has an algorithm that proactively flags and bans content from groups such as Al-Qaeda and ISIS. Why does it not do the same for white nationalist and white supremacist content? In a Vice article, a Twitter employee reportedly stated that the algorithm, when turned on white supremacist content, also flagged content from Republican politicians (Cox & Koebler, 2019). The idea was then abandoned for the sake of maintaining political neutrality.

    An example of content from Republican politicians that invoke white supremacist rhetoric is a series of tweets by US President Donald Trump in July that attacked Democrat Representatives Ilhan Omar, Rashida Tlaib, Ayanna Pressley, and Alexandria Ocasio-Cortez, telling them to “go back and help fix the totally broken and crime infested places from which they came,” a statement that invokes anti-immigrant rhetoric (Resto-Montero, 2019). Neither Trump’s account nor these particular tweets have been taken down.

    By refusing to systematically ban white supremacist content at the risk of banning Republican politicians like Trump, Twitter allows white nationalist ideas and rhetoric to continue to proliferate on the platform. Such content inspires like-minded individuals to pursue dangerous courses of action. Brenton Tarrant, the perpetrator of the Christchurch mosque shootings that killed 51 people, regularly posted and shared anti-Muslim and anti-immigrant sentiments on his Twitter account (McBride, 2019). He also described Donald Trump as a “symbol of white supremacy” in a manifesto he shared online shortly before the shooting (Yahoo News, 2019). Tarrant’s Twitter account was taken down only after the event.

    The consequences of being selective in enforcing your rules and regulations in this manner are dire. In line with this, I implore you to revisit your policy regarding alt-right and white supremacist content on Twitter and to turn your regulatory algorithm on white supremacist and white nationalist—basically Nazi—content, even if it bans Republicans, even if it bans the president of the United States himself. Your conception of political neutrality must take a backseat to the safety of your user base.

    I hope you take this letter into consideration.

    References:

    Cox, J., & Koebler, J. (2019, April 25). Twitter Won’t Treat White Supremacy Like ISIS Because It’d Have to Ban Some GOP Politicians Too. Retrieved from https://www.vice.com/en_us/article/a3xgq5/why-wont-twitter-treat-white-supremacy-like-isis-because-it-would-mean-banning-some-republican-politicians-too.

    McBride, J. (2019, March 17). Brenton Tarrant Social Media: Twitter Rants, Live Video. Retrieved from https://heavy.com/news/2019/03/brenton-tarrant-social-media-twitter-video/.

    Resto-Montero, G. (2019, July 14). Democrats defend congresswomen Trump says should “go back” to their “corrupt” countries. Retrieved from https://www.vox.com/2019/7/14/20693758/donald-trump-tweets-racist-xenophobic-aoc-omar-tlaib-pressley-back-countries.

    Twitter. (n.d.). Terrorism and violent extremism policy. Retrieved from https://help.twitter.com/en/rules-and-policies/violent-groups.

    Yahoo News. (2019, March 15). Brenton Tarrant: Christchurch shootings suspect said Trump is ‘symbol of white supremacy’. Retrieved from https://news.yahoo.com/new-zealand-shootings-racist-gunman-brenton-tarrant-inspired-anders-breivik-wanted-revenge-104941343.html.

  10. Dear Jack Dorsey,

    A kind and pleasant day to you. The 330 million monthly active Twitter users (Twitter, as cited in Lin, 2019), however, may not be experiencing kindness and pleasantness when using the application you founded. The rules and standards that Twitter implements appear to be centered on a Western context: the detection of harmful remarks and bullying focuses on posts in the English language. Meanwhile, the Philippines, my Filipino-speaking country, ranked 10th in the world in the number of Twitter users as of 2012 (Semiocast, as cited in Montecillo, 2012). In this and other countries that do not use English as their main language, hateful comments may continue to proliferate unchecked.

    Twitter can be a hostile place. According to one study, an estimated 15,000 bullying tweets are posted every day, or about 100,000 harsh statements each week (Fitzgerald, 2012). Take note: these are only the detected ones, those posted in obvious language using words like “punch,” “kick,” or “bully.” These numbers are not improving, as a majority (59%) of teens said they have been bullied or harassed online, according to a study released by Pew in September 2018 (Wellemeyer, 2019). In addition, spam accounts crawl freely on Twitter, often using language geared towards degrading, insulting, and threatening other people (Chatzakou et al., 2017). A 2016 study (Tian, 2016) also showed that Twitter is a breeding ground for negativity: a negative tweet is more likely to be retweeted and shared than a positive one. Twitter, in other words, is becoming a platform for cyberbullying.

    Bullying has become a more prominent problem, and it is no longer done only physically. Bullies can now throw punches and break their victims with the blow of words alone, posted on social media sites like Twitter with the intention to hurt or embarrass (Sanchez & Kumar, 2011). Since these pointed remarks are not effectively removed by Twitter, the insults stay online forever. The victim cannot escape. The victim cannot forget (Dug, 2018). Mean and hurtful comments can cause anxiety, depression, and loneliness in the victim (Campbell, 2017). In some cases, they can even lead to suicide (Sanchez & Kumar, 2011; Campbell, 2017).

    With all of this said, it is essential to form and maintain a peaceful community, free of hateful comments and bullying. For this to come to fruition, you must expand your bullying-detection system by adding other languages and emojis to your hateful-remark detectors. This should account for the context and culture of each country, including what might be offensive or be construed as bullying by other nationalities. Twitter should hire experts on the language and culture of each country to aid in bullying detection. Furthermore, as stated above, the tweets getting the most attention should be looked into, as they might be the ones promoting negativity. Lastly, Twitter should incorporate verification steps in account creation to reduce bots and multiple accounts, such as CAPTCHA or allowing only one account per mobile number.

    For social justice and peace,
    Eugene.

    Citations:

    Campbell, M. (2017). Facebook and Twitter ‘harm young people’s mental health’. Retrieved from https://www.theguardian.com/society/2017/may/19/popular-social-media-sites-harm-young-peoples-mental-health

    Chatzakou, D., Kourtellis, N., Blackburn, J., De Cristofaro, E., Stringhini, G., & Vakali, A. (2017). Mean Birds: Detecting Aggression and Bullying on Twitter. 13–22. doi: 10.1145/3091478.3091487

    Dug, W. (2018). Twitter is a platform for bullies – that’s why I quit. Retrieved from https://www.thenational.scot/comment/columnists/16315855.twitter-is-a-platform-for-bullies-thats-why-i-quit/

    Fitzgerald, B. (2012). Bullying On Twitter: Researchers Find 15,000 Bully-Related Tweets Sent Daily (STUDY). Retrieved from https://www.huffpost.com/entry/bullying-on-twitter_n_1732952.

    Lin, Y. (2019). 10 Twitter Statistics Every Marketer Should Know in 2019 [Infographic]. Retrieved from https://www.oberlo.com/blog/twitter-statistics#targetText=Number%20of%20Twitter%20Users,-One%20of%20the&targetText=Twitter%20boasts%20330%20million%20monthly,basis%20(Twitter%2C%202019).

    Montecillo, P. (2012). Philippines has 9.5M Twitter users, ranks 10th. Retrieved from https://technology.inquirer.net/15189/philippines-has-9-5m-twitter-users-ranks-10th.

    Tian, X. (2016). Investigating Cyberbullying in Social Media: The case of Twitter. KSU Proceedings on Cybersecurity Education, Research and Practice. 4. https://pdfs.semanticscholar.org/8a20/b023d8c19befecb11e15a3296dcc2ce91de4.pdf

    Sanchez, H., & Kumar, S.T. (2011). Bullying Detection. Retrieved from https://users.soe.ucsc.edu/~shreyask/ism245-rpt.pdf

    Wellemeyer, J. (2019). Instagram, Facebook and Twitter struggle to contain the epidemic in online bullying. Retrieved from https://www.marketwatch.com/story/why-it-may-be-too-late-for-instagram-facebook-and-twitter-to-contain-the-epidemic-in-online-bullying-2019-07-15

  11. Dear Mr. Mark Zuckerberg:

    In this political climate, it is no longer unknown to many that, now more than ever, we are very much prone to disinformation, or what is referred to as “fake news.” This letter is meant to address this problem, how the company has been combatting it recently, and how your regulations still fail to fight it.

    In 2017, Adam Mosseri, then one of Facebook’s executives and now the head of Instagram, posted an article on Facebook about how the company promises to provide accurate information to its users. In the article, he emphasized that the company is willing to engage third-party fact-checking organizations to better evaluate posts on the site. These organizations have the ability to report to Facebook content that they deem false or harmful. Mosseri also promised to improve the detection of fake user accounts, which are notorious for spamming. The article also stated that flagging potentially false content is just a few clicks away for users (Mosseri, 2017).

    However, a couple of years later, the same problems of misinformation and the spread of fake news still persist. Just this September 2019, researchers reported around 32 million links “that included data about whether users labeled millions of posts as fake news, spam or hate speech, or if fact-check organizations raised doubts about the posts’ accuracy” (Alba, 2019). This means Facebook and its fact-checking organizations have been failing massively in combatting these attacks, which poses a huge problem for regulating what users see and might perceive as accurate or not.

    I would also like to point out that there was one time this year when Facebook blatantly refused to remove content that misinforms its users. In May 2019, the American politician Nancy Pelosi was involved in such an issue: footage of Pelosi had been deliberately slowed down to give the impression that she was drunk or unwell, and the video was spread with the help of Trump supporters (Waterson, 2019). This is a clear manifestation of disinformation, as the video might mislead people into thinking that Pelosi really is unwell or mentally ill.

    Facebook, however, refused to take down the video. In its statement, the company explicitly said, “There is also a fine line between false news and satire or opinion. For these reasons, we don’t remove false news from Facebook but instead, significantly reduce its distribution by showing it lower in the News Feed.” This means the company is not removing false content, which clearly goes against its earlier mandate to remove fake news completely from its system (Healey, 2019).

    Mr. Zuckerberg, I really do hope that you improve your mechanisms for combatting fake news on the site. Facebook is enormously popular nowadays, and with its billions of users, it is safe to say that you have every responsibility to engage users in a safer and properly guided platform. I hope the day will come when fake news and disinformation are fought triumphantly, and when they become something we rarely see on our news feeds.

    Respectfully yours,
    Rudolf Rafael Quimson

    References:
    Alba, D. (2019). Ahead of 2020, Facebook falls short on plan to share data on disinformation. The New York Times. Retrieved from https://www.nytimes.com/2019/09/29/technology/facebook-disinformation.html

    Healey, J. (2019). Facebook’s response to the doctored Nancy Pelosi video was as spineless as ever. Los Angeles Times. Retrieved from https://www.latimes.com/opinion/enterthefray/la-ol-facebook-doctored-nancy-pelosi-video-20190524-story.html

    Mosseri, A. (2017). Working to stop misinformation and false news. Retrieved from https://www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news

    Waterson, J. (2019). Facebook refuses to delete fake Pelosi video spread by Trump supporters. The Guardian. Retrieved from https://www.theguardian.com/technology/2019/may/24/facebook-leaves-fake-nancy-pelosi-video-on-site

  12. Dear Mr. Jack Dorsey,

    I am writing to inform you of what Twitter has become. Your platform has contributed majorly to the global evolution of information consumption over the past decade. Yet even with a renewed management model and policy for content creation, it seems that more and more Tweets wreak havoc rather than ensure a safe community among your users. Given the increased presence of politicians in the landscape of social media, the divide between the average Twitter user and familiarity with, or even knowledge of, current politics has become much narrower than it was a decade ago. The arbiter of truth in the information presented to us points directly to whoever is in power or authority, with journalism being tarnished, more or less becoming a political ordeal and straying away from truth. Consequently, a spiral of fear and confusion through conspiracies makes its way to each user. What this ultimately brings about is a failure to safeguard the well-being of your users. Not only is their acquisition of knowledge harmed, but also their sense of safety and security, which is supposedly at the forefront of your platform’s philosophy.

    Sparking a number of debates on this matter is your refusal to remove current US President Donald Trump and his Tweets from your platform, since he apparently does not go against your enforcement philosophy, which allows behaviors of so-called ‘legitimate public interest’ and exempts ‘military or government entities’ from your guidelines on violence and abuse. Far-right political subculture researcher Becca Lewis argued that Trump has made his Twitter account “a powerful propaganda tool for some of these far-right movements” (qtd. in Ohlheiser, The dangerous cycle par. 9). Actual news gets published, and the conspiracy spiral spins faster with each of his activities on your platform, adding to the already massive amount of online misinformation and of generally inappropriate content involving targeting and harassment. To name a few: his message to Iranian President Rouhani, his ‘Nuclear Button,’ and his blatant statement that North Korea ‘won’t be around much longer’ (Ohlheiser, Kamala Harris par. 2, 8).

    What all of this boils down to is a global – yes, global – risk of digital misinformation, according to the World Economic Forum (WEF): a misuse of the information presented to us at the cost of truth, characterized by digital wildfires that breed insecurity about the sources we come across and dare each of us to acquire literacy in consuming information online. Such misinformation appeals to a major stream of beliefs at the cost of society’s collective judgment in a sociopolitical context, with the resulting misinformed decisions working against the development of the individual (Lewandowsky et al. 107-8). Put simply, misinformation, especially from politicians, means misinformed thinking, bad decisions, and a messed-up life for the future generation. Not only is the essence of journalism and truth in information under serious attack, but so are the common people who consume it. The current age makes journalists themselves both producers and objects of discourse, victims of political violence and censorship (Carlson 1882-3). Yes, this is what has become of your platform.

    With all of this in mind, I recommend an immediate remodeling of your existing guidelines: one that strictly ensures exempted behaviors are of genuinely legitimate public interest and does not give unmerited priority to military or government entities, so as to prevent this global conflict from developing into a cancer not only of Twitter users but of humanity.

    I have always admired how your platform is able to breed a generation of critical thinkers, open to real-life conversations and discourses at the press of a key or the tap of a glass panel, but perhaps a step back from all of this would be wise, to see where things can be adjusted or altered. In this day and age of digital misinformation wildfires, a question worth thinking and acting on, Mr. Dorsey, is: “Who really is the arbiter of truth?”

    With anticipation,
    Miggy

    Works Cited:

    Carlson, Matt. “The Information Politics of Journalism in a Post-Truth Age.” Journalism Studies 19.13 (2018): 1879-1888. Taylor & Francis Group. Web. 14 Oct. 2019.

    Lewandowsky, Stephan, et al. “Misinformation and Its Correction: Continued Influence and Successful Debiasing.” Psychological Science in the Public Interest 13.3 (2012): 106-131. SAGE. Web. 14 Oct. 2019.

    Ohlheiser, Abby. “Kamala Harris wants Trump suspended from Twitter for ‘harassment.’ These 3 loopholes protect him.” The Washington Post. The Washington Post, 3 Oct. 2019. Web. 14 Oct. 2019.

    –. “The dangerous cycle that keeps conspiracy theories in the news – and Trump’s tweets.” The Washington Post. The Washington Post, 12 Aug. 2019. Web. 14 Oct. 2019.

  13. Dear Mr. Jack Dorsey,

    Twitter has indeed become a global phenomenon, bringing countries into discourse about issues. The question, though, is whether these discussions are beneficial to those involved, especially when it comes to mental health and sexuality. Your rules indicate that the community cannot post sexual content, yet existing Twitter communities called “alter” are thriving. There are hashtags for these communities, which make those participating in this activity very easy to find, and they are almost all connected to each other through followers and the like. Some are even minors who are able to post photos that are sexual in content. There were 18.4 million reports of child sexual abuse images all over the internet in 2018, according to the New York Times (Dance & Keller, 2019). The safety standards and the enforcement of rules and policies are not enough to safeguard these minors, who are still discovering right from wrong.

    This alter community also includes sex workers who sell their content to those interested. While legalizing sex work and accepting one’s body without shame are different discussions, this letter would like to emphasize how Twitter needs to reevaluate its enforcement of policies if communities violating its rules are so easily overlooked. Most of the women in the community, who joined to explore their sexuality and express love for their bodies in a supposedly safe place, are often harassed by men (Amnesty International, 2018). Given the persistence of this kind of culture, where harassing a person is tolerated, we can only imagine how the minors are being treated. It may scar these minors, or it may instill in their heads that this culture of toxic masculinity is okay. This should never be the case for any platform. If Twitter does not verify the age of its users, it puts up a lousy structure that lets the culture of harassment propagate through negligence.

    Hiring more experts and staff to countercheck the content posted on users’ accounts is one way to lessen this kind of activity. Twitter may also take a more aggressive stance on this issue by hosting threads of discourse on child sexual exploitation and the like. Users could likewise be required to submit valid documents indicating their age or year of birth when suspected of sexual activities; to avoid violating the right to privacy, this should apply only when the account has been reported. Although there have been efforts by the company to take these accounts down, it has been said that this was done only by technology. There have been no real debriefings for these account holders, and blocked users can simply create new accounts (Smith, 2018).

    Much like the world we live in, the platform cannot be perfect and will always need to evolve. Maybe it is time to revisit our practices as a community, so that they reflect well in the online footprints we leave for the next generation and manifest in our real lives as well.

    With my utmost concern for the children,
    EJ

    Dance, G. & Keller, M. (2019) “The Internet Is Overrun With Images of Child Sexual Abuse. What Went Wrong?” Retrieved from https://www.nytimes.com/interactive/2019/09/28/us/child-sex-abuse.html

    Smith, E. (2018) “Twitter takes down half a million accounts over child sexual exploitation fears in just half a year” Retrieved from https://www.dailymail.co.uk/news/article-6494657/Twitter-takes-half-million-accounts-child-sexual-exploitation-fears-just-half-year.html

    Amnesty International. (2018) “Toxic Twitter – A Toxic Place for Women” Retrieved from https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-1/

  14. Jack Dorsey
    Chief Executive Officer
    Twitter

    Dearest Sir Dorsey,

    I am writing to your high office to express my concern about your company’s policy with regard to combatting disinformation (a.k.a. fake news).

    First and foremost, it is laudable that your company recognizes the potential of your platform – Twitter – to be “a powerful antidote to the intentional spread of false information”, and that you are “taking proactive steps to stop abuse, spam, and manipulation before they happen” (Elections integrity, n.d.). A formidable feat your company has accomplished in line with this vision is the takedown of thousands of fake news accounts worldwide as of September 2019 (France-Presse, 2019).

    However, as circumstances dictate, more concrete actions must be undertaken, as there is still false information that may have eluded your technology’s safeguards and caused harm to people. A good example is the case of Ethan Lindenberger. Ethan’s parents became victims of false information about vaccinations, which put Ethan’s and his siblings’ lives in danger, as they were left exposed to diseases like measles. Ethan became a victim of a disinformation campaign in another sense as well: suspiciously, his own pastor told him not to go to church for his own protection (Lapowsky, 2019).

    The challenge is further amplified by the fact that many actual Twitter users (and not just bots) do not really question or critique the content they view, and they actively partake in the spread of false information by liking and/or retweeting (Our Social Times, 2018). A study at the Massachusetts Institute of Technology found that false information spreads faster than real news on Twitter, and that actual Twitter users are greater contributors to this dilemma than bots and fake accounts (The Daily Beast, 2018).

    As such, improvements in your security systems are in order. Your company’s artificial intelligence must continuously undergo recalibration to be able to filter the spread of false information. Also, the focus should not be centered only on potential fake accounts but on actual content and tweets, as real Twitter users are found to be contributors to the spread of fake news as well. Continuous collaboration with researchers and experts is also suggested, to employ human intelligence in addressing lapses that artificial intelligence cannot.

    In parting, I hope your company takes the above-mentioned recommendations into consideration. Let us all work together in making our Twitter spaces safe and free from false information.

    In service of the people,

    Leandro Rafael Purisima
    Student
    University of the Philippines

    REFERENCES:

    Elections integrity. (n.d.). Retrieved October 14, 2019, from https://about.twitter.com/en_us/values/elections-integrity.html.

    Fewer than 10% of Twitter users question fake news. (2018, June 11). Retrieved October 14, 2019, from https://oursocialtimes.com/fake-news/.

    France-Presse, A. (2019, September 20). Twitter closes thousands of fake news accounts worldwide. Retrieved October 14, 2019, from https://technology.inquirer.net/90793/twitter-closes-thousands-of-fake-news-accounts-worldwide.

    Lapowsky, I. (2019, May 9). ‘Fake News Victims’ Meet With Twitter and Facebook. Retrieved October 14, 2019, from https://www.wired.com/story/fake-news-victims-meet-twitter-facebook/.

    The Daily Beast. (2018, March 9). Study: Fake News Spreads Faster Than Real News on Twitter. Retrieved October 14, 2019, from https://www.thedailybeast.com/study-fake-news-spreads-faster-than-real-news-on-twitter.

  15. Dear Mr. Jack Dorsey,

    I am writing to inform you of copyright infringement within the art community. Twitter is a platform where original artworks are shared and celebrated. Unfortunately, art theft has become an issue for content creators. I acknowledge your efforts to safeguard users’ intellectual property.

    However, I would like to point out a concern in your Copyright policy. It states that only the copyright holder has the right to file a DMCA (Digital Millennium Copyright Act) complaint. It does not address what actions Twitter will take should the offender turn his/her account private or block the complainant. This prevents the copyright holder from providing identification of the infringing material.

    This may well constitute a loophole, since the sharing happens in a private virtual setting (Bauer, 2015). It bars the original content creator from accessing the material. I recommend that the Report Tweet menu include a copyright infringement option. This would make reporting easier and accessible to others, should the complainant be blocked.

    I hope you take these matters into consideration.



    Many thanks,
    Gerlin Bongato

    Reference:

    Bauer, I. (2015). When copyright and social media meet: Zooming in on current issues and cases. Retrieved from https://journals.flvc.org/FAU_UndergraduateLawJournal/article/view/84605/81630

  16. Dear Mr. Mark Zuckerberg,

    I am writing this letter to address my concern about the ongoing harassment and cyberbullying issues on Instagram. As the current owner of this social media platform, I hope that you will listen to its users and help them have the safe space that Instagram has always promised, which, in the words of Instagram’s former CEO Kevin Systrom, is “the nicest place on the darn internet.”

    For the past years, despite Instagram’s efforts to promote kindness and develop features that could help lessen cyberbullying and harassment on the platform, it continues to be the number one social media platform where harassment and cyberbullying are prevalent (Petrov, 2019). The number of victims continues to grow, which seems to contradict Instagram’s #KindComments campaign. According to interviews with former employees of Instagram, the campaign did not seem connected to what was really going on inside the company; despite the growing issue, there were not enough people working on the project (Lorenz, 2018).

    I think the problem here is that raising awareness through the campaign is not enough. Instagram should focus more on developing technology that would easily detect cases of harassment and bullying and spare victims the negative repercussions. Failing to attend to such issues harms people, especially the youth, who are the platform’s biggest users.

    Many articles have featured interviews with victims of such instances. Even with Instagram’s policies against bullying and harassment, victims flag and report their cases but never get a response from the Instagram team, even up until today, when years have already passed. An example of this is the experience of Riley (a pseudonym). She was 14 years old when she first experienced bullying on Instagram. She was on an account devoted to American Dolls when she posted a pro-LGBT hashtag. Just for that, some people found out her cellphone number and started trolling her. She received threatening calls for days, saying that they would find out where she lives and do things to her. The experience was so overwhelming that it forced her to leave the platform and delete her account (Lorenz, 2018).

    I hope Instagram can do something about this. Bullying and harassment are not easy experiences to just get over. They cause trauma and can affect one’s mental health for a long period of time, and not only on Instagram, but also on Facebook, since you own that company too. Studies have shown that Instagram and Facebook are the worst places where these things happen: in research conducted by a British anti-bullying group, Instagram tops the list at 42%, with Facebook following at 37% (Gibbs, 2017). The two are not far from each other.

    With this, I hope that you will consider my suggestion to focus more on the development of features and technology that would help fight cyberbullying and harassment. Instagram should prioritize this above anything else if the company wants to stay true to its goals as a social media platform.

    Thank you, and I look forward to the future of Instagram.

    References:

    Gibbs, C. (2017). Instagram is the Worst Social Network for Cyberbullying: Study. Daily News. Retrieved from https://www.nydailynews.com/life-style/instagram-worst-social-network-cyber-bullying-study-article-1.3339477

    Lorenz, T. (2018). Instagram Has A Massive Harassment Issue. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2018/10/instagram-has-massive-harassment-problem/572890/

    Petrov, C. (2019). Cyberbullying Statistics 2019. Retrieved from https://techjury.net/stats-about/cyberbullying/

  17. Dear Mr. Mark Zuckerberg:

            Greetings.
    
            I am writing to inform you that the steps Facebook is currently taking to enforce its policies and community standards to safeguard its users are not sufficient. Several abusive and offensive posts have gone undetected, some of which are still up to this day. Users are violating many of the policies which are supposed to keep the Facebook community safe, and even the Facebook team themselves have violated their users’ right to privacy. Among the violations that still circulate are fake news, hate speech, bullying, and privacy breaches.
    
            Fake news on the Internet has been more widely shared on your platform than on other social media platforms (Silverman, 2016). Sadly, many users who see fake news articles claim to believe them (Silverman & Singer-Vine, 2016). Put together, these facts show that fake news plays a powerful and influential role in the digital realm, affecting even national events. It played a big part in the 2016 US presidential campaign: Facebook was caught at the center of the issue, accused of swinging some voters in favor of then-candidate Donald Trump by allowing misleading and outright wrong stories to spread on the platform (Wingfield et al., 2016). These included a report from wtoe5news.com with the headline “FBI agent suspected in Hillary email leaks found dead in apparent murder-suicide” and a story that Pope Francis had personally endorsed Trump’s campaign (Allcott & Gentzkow, 2017). Experts and analysts have suggested that Donald Trump would not have been elected president were it not for the influence of fake news (Parkinson, 2016; Read, 2016; Dewey, 2016).
    
            Facebook also has a problem detecting and censoring or banning hate speech and bullying. This often happens against minority groups, since Facebook is a public platform wherein anyone can post. Several users have expressed hate speech against PWDs in their posts and comments. In January 2019, the #BoyetChallenge was launched by many users, mimicking Boyet, one of the main characters in GMA’s show “My Special Tatay,” who has autism. Following that, in March, a local coffee kiosk which employs PWDs and out-of-school youth posted a screenshot of messages from a customer complaining about their deaf and blind staff. The following month, some users posted offensive comments on the profile picture of a PWD, which was his graduation picture; they made fun of his disability and launched very uncalled-for attacks. Even prominent figures with widely followed Facebook accounts are not properly monitored by your team. Present Deputy Executive Director of the Overseas Workers Welfare Administration (OWWA) and former Palace Communications Assistant Secretary Mocha Uson and her friend Drew Olivar posted a video online of the two of them mocking sign language (Madarang, 2019). Aside from violating your community standards, acts like these are unlawful under Republic Act 7277, the Magna Carta for Disabled Persons. Based on its amended version, Republic Act 9442, public ridicule includes “making mockery of a person with disability whether in oral or in writing” (ncda.gov).
    
            Your team has also committed a violation against us, your users, by blindsiding us in our patronage of your site. We now feel unsafe in the very platform where you wanted people to build genuine connections. A whistleblower revealed that the private information of more than 50 million Facebook users had been harvested for the political consultancy Cambridge Analytica (Graham-Harrison & Cadwalladr, 2018). This blatantly goes against the 2012 consent decree you signed, stemming from a previous FTC investigation into privacy concerns, to better protect user privacy (Paul, 2019). Because of this, several users have already lost their trust in Facebook, deactivating their accounts or not posting as much as they used to. We now have a Facebook community wary not only of malicious users, but also of the very people who created this “connected community”.
    
            If you are not able to enforce stricter policies, mistrust and skepticism will grow among your users. The presence of fake news producers on a platform has several possible social costs. First, consumers who mistake a fake outlet for a legitimate one hold less-accurate beliefs and are misinformed. Second, users may become more skeptical of producers of legitimate news, as it becomes hard to distinguish from fake news. Third, it reduces the demand for high-precision, low-bias reporting, in turn reducing the incentives to invest in accurate reporting and to truthfully report signals. These negative effects are not worth the entertainment value gained by those who do like reading fake news (Allcott & Gentzkow, 2017). Additionally, on a national level, fake news manipulates the unknowing public, undermining the ability of the democratic process to select quality candidates. The victory of presidential candidate Donald Trump is a manifestation of this.
    
            As for hate speech and bullying, these acts target your disadvantaged users and push them further into the peripheries. They make victims feel that even social media is unsafe for them, and they affect how victims perceive themselves. They severely cut down their self-esteem and make them ashamed to face the public. In many cases, PWDs choose not to pursue an education or are forced to drop out of school because of the bullying they experience in school and online. I would like you to hear my cousin’s story. She is diagnosed with bipolar disorder and was already about to graduate from high school. However, due to mean Facebook posts and comments directed at people with mental disabilities, and at her specifically, she was forced to drop out of school so she would not come in contact with her bullies again. She was supposed to graduate that year.
    
            In order to improve Facebook, I suggest enforcing stricter policies and covering a larger area of monitoring. Given Facebook’s repeated privacy breaches, I also recommend the fundamental structural reforms suggested by Senator Mark R. Warner of Virginia. Facebook should have a checks-and-balances system even within its executive board. If possible, each country should have a government official monitoring the Facebook higher-ups, to ensure that the site is not used for the team’s personal gain. A representative from the Senate Committee on Public Information and Mass Media, or any counterpart to it, would help with transparency. As for monitoring hate speech and fake news, Facebook should update its language base. It is usually comments and posts that are not in English that manage to slip under the radar. Facebook can already do translations, but they are still not accurate. Further development of Facebook’s multi-language features could greatly help in better monitoring posts.
    

    Thank you for your time and kind consideration.

    In good faith,
    Giland Lim

    REFERENCES:
    Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. doi: 10.3386/w23089

    Dewey, C. (2016). “Facebook Fake-News Writer: ‘I Think Donald Trump is in the White House because of Me.’” Washington Post, November 17. https://www.washingtonpost.com/news/the-intersect/wp/2016/11/17/facebook-fake-news-writer-i-think-donald-trump-is-in-the-white-house-because-of-me/.

    Graham-Harrison, E., & Cadwalladr, C. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. Retrieved October 14, 2019, from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.

    Madarang, C. R. S. (2019, April 2). Differently abled student achieves dream of graduating despite being mocked. Retrieved October 14, 2019, from http://www.interaksyon.com/trends-spotlights/2019/04/02/146679/concerned-filipinos-hit-back-at-people-mocking-differently-abled-students-graduation-photo/.

    Parkinson, H. (2016). “Click and Elect: How Fake News Helped Donald Trump Win a Real Election.” Guardian, November 14

    RA 9442 – An Act Amending Republic Act No. 7277, Otherwise Known As The “Magna Carta For Disabled Persons, And For Other Purposes”. (n.d.). Retrieved October 14, 2019, from https://www.ncda.gov.ph/disability-laws/republic-acts/republic-act-9442/.

    Read, M. (2016). “Donald Trump Won because of Facebook.” New York Magazine, November 9.

    Wingfield, N., Isaac, M., & Benner, K. (2016, November 14). Google and Facebook Take Aim at Fake News Sites. The New York Times. Retrieved from https://www.mediapicking.com/medias/files_medias/nytimes—google-and-facebook-take-aim-at-fake-news-sites-0237488001479491012.pdf

    Wong, J. C. (2019, July 12). Facebook to be fined $5bn for Cambridge Analytica privacy violations – reports. Retrieved October 14, 2019, from https://www.theguardian.com/technology/2019/jul/12/facebook-fine-ftc-privacy-violations.


  19. Dear Mr. Sundar Pichai:

    Ever since YouTube was launched fourteen years ago, people have been able to post all kinds of content freely as long as it complies with the community guidelines. However, some videos slip past the guidelines despite containing content that violates these policies. Minors make up a large part of the community, and exposure to videos with explicit content could harm them. Moreover, content creators whose target audience includes minors have been posting videos that disregard the community guidelines, exposing minors to sexual content, harm, and violence.

    To cite an example, content creator Jake Paul posts videos such as pranks, challenges, and vlogs taken to an extremely dangerous level, yet his target audience includes minors. One of his videos “promoted a shady gambling website to their millions of subscribers, many of whom are children” (Jennings, 2019). The website even allows minors to participate. There were incidents of customers not receiving their products, tracking numbers not working, and demands for additional payments. Another video on this account involves the Bird Box challenge, in which he was blindfolded while driving and while walking in the middle of the street. Although the video has been taken down, re-uploads of it remain. His videos are not age-restricted, drawing views from minors who may attempt the same stunts (Alexander, 2019). YouTube clearly states that “extremely dangerous challenges” and “dangerous or threatening pranks” should not be posted. Because Paul’s videos continue to be posted and streamed on the site, minors are encouraged to imitate him, since he stands as a figure to be followed. The very policy against posting content that endangers the emotional or physical well-being of minors is already being violated.

    In your team’s effort to reduce the exposure of minors to these kinds of videos, you have reported having “removed more than 800,000 videos for violations of our child safety policies, the majority of these before they had ten views” during the first quarter of 2019. This is not enough to cover the millions of videos posted on the site that show harmful content. Such is the case with Paul’s Bird Box challenge: it has been taken down but re-uploaded, and it can still be viewed by minors. Regardless of viewership or subscriber count, videos with this kind of content should be taken down. To reduce the chance of a terminated account simply creating a new channel, the user’s IP address could be blocked from the site. Completely changing the recommendation algorithm in order to protect minors could also be considered.

    Hoping for your consideration. Thank you.

    Sincerely,
    Ayie Rodriguez

    References:
    Jennings, R. (2019). YouTube stars promoted gambling to kids. Now they have to answer to their peers. Retrieved from https://www.vox.com/the-goods/2019/1/4/18167341/youtube-jake-paul-ricegum-mystery-brand

    Alexander, J. (2019). Jake Paul shows off dangerous stunts for Bird Box challenge. Retrieved from https://www.theverge.com/2019/1/7/18172657/jake-paul-bird-box-challenge-youtube-blindfold-netflix

    The YouTube Team. (2019). An update on our efforts to protect minors and families. Retrieved from https://youtube.googleblog.com/2019/06/an-update-on-our-efforts-to-protect.html

  20. Dear Mr. Mark Zuckerberg:

          Greetings.
    
          I am writing to inform you that the steps Facebook is currently taking to enforce its policies and community standards to safeguard its users are not sufficient. Several abusive and offensive posts have gone undetected, some of which are still up to this day. Users are violating many of the policies that are supposed to keep the Facebook community safe, but even the Facebook team itself has violated your users' right to privacy. Among the violations that still circulate are fake news, hate speech, bullying, and privacy breaches.
    
          Fake news has been more widely shared on your platform than on any other social media platform (Silverman, 2016). Sadly, many users who see fake news articles claim to believe them (Silverman & Singer-Vine, 2016). Taken together, these facts show that fake news plays a powerful and influential role in the digital realm, affecting even national events. It played a big role during the 2016 US presidential campaign. Facebook was caught at the center of the issue, accused of swinging some voters in favor of then-candidate Donald Trump by allowing misleading and outright false stories to spread on the platform (Wingfield et al., 2016). These included a report from wtoe5news.com with the headline "FBI agent suspected in Hillary email leaks found dead in apparent murder-suicide" and a claim that Pope Francis had personally endorsed Trump's campaign (Allcott & Gentzkow, 2017). Experts and analysts have suggested that Donald Trump would not have been elected president were it not for the influence of fake news (Parkinson, 2016; Read, 2016; Dewey, 2016).
    
          Facebook also has a problem detecting and censoring or banning hate speech and bullying. This often targets minority groups and the disadvantaged, since Facebook is a public platform in which anyone can participate. Several users have expressed hate speech against PWDs in their posts and comments. In January 2019, many users launched the #BoyetChallenge, mimicking Boyet, one of the main characters in the GMA 7 show "My Special Tatay," who has autism. Following that, in March, a local coffee kiosk which employs PWDs and out-of-school youth posted a screenshot of messages from a customer complaining about their deaf and blind staff. The following month, some users posted offensive comments on a PWD student's profile picture, which was his graduation photo. They made fun of his disability and launched uncalled-for attacks. Even prominent figures with widely followed Facebook accounts are not properly monitored by your team. Deputy Executive Director of the Overseas Workers Welfare Administration (OWWA) and former Palace Communications Assistant Secretary Mocha Uson and her friend Drew Olivar posted a video of the two of them mocking sign language (Madarang, 2019). Aside from violating your community standards, acts like these are unlawful under Republic Act 7277, the Magna Carta for Disabled Persons. Under the amended version, RA 9442, public ridicule includes "making mockery of a person with disability whether in oral or in writing" (ncda.gov).
    
          Your team has also committed a violation against us, your users, by blindsiding us in our patronage of your site. We now feel unsafe on the very platform where you wanted people to build genuine connections. A whistleblower revealed that you had given the private information of more than 50 million Facebook users to the political consultancy Cambridge Analytica (Graham-Harrison & Cadwalladr, 2018). This blatantly goes against the 2012 consent decree you signed, stemming from a previous FTC investigation into privacy concerns, in which you committed to better protect user privacy (Wong, 2019). Because of this, several users have already lost their trust in Facebook, deactivating their accounts or not posting as much as they used to. We now have a Facebook community wary not only of malicious users but also of the very people who created this "connected community".
    
          If you are not able to enforce stricter policies, there will be a feeling of mistrust and skepticism among your users. The presence of fake news producers on a platform has several possible social costs. First, consumers who mistake a fake outlet for a legitimate one hold less-accurate beliefs and are misinformed. Second, users may become more skeptical even of producers of legitimate news, as they become hard to distinguish from fake news. Third, it reduces demand for high-precision, low-bias reporting, in turn reducing the incentives to invest in accurate reporting and to report signals truthfully. These negative effects are not worth the entertainment value gained by those who enjoy reading fake news (Allcott & Gentzkow, 2017). Additionally, on a national level, fake news manipulates the unknowing public, undermining the ability of the democratic process to select quality candidates. The victory of presidential candidate Donald Trump is a manifestation of this.
    
          As for hate speech and bullying, these acts target your disadvantaged users and push them further into the peripheries. They make victims feel that even social media is unsafe for them, and they distort victims' perception of themselves. Such abuse severely cuts down their self-esteem and makes them ashamed to face the public. In many cases, PWDs choose not to pursue an education or are forced to drop out of school because of the bullying they experience in school and online. I would like you to hear my cousin's story. She was diagnosed with bipolar disorder and was in high school. Due to mean Facebook comments directed at people with mental health issues, and at her specifically, she was forced to drop out of school so she would not come into contact with her bullies again. She was supposed to graduate that year.
    
          In order to improve Facebook, I would suggest enforcing stricter policies and covering a larger area of monitoring. I also recommend the fundamental structural reforms suggested by Senator Mark R. Warner of Virginia, given Facebook's repeated privacy breaches. Facebook should have a checks-and-balances system even within its executive board. If possible, each country should have a government representative to Facebook monitoring the company's higher-ups, so as to ensure that the site is not used for the team's personal gain. A representative from the Senate Committee on Public Information and Mass Media, or any counterpart body, would help with transparency. As for monitoring hate speech and fake news, Facebook should update its language base. It is usually comments that are not in English that manage to slip under the radar. Facebook can already do translations, but they are still not accurate. Further development of Facebook's multi-language capability could greatly help in better monitoring posts.
    
          Thank you for your time and kind consideration.
    

    In good faith,
    Giland Lim

  21. REFERENCES:
    Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31, 211–236. doi: 10.3386/w23089

    Dewey, C. (2016). “Facebook Fake-News Writer: ‘I Think Donald Trump is in the White House because of Me.’” Washington Post, November 17. https://www.washingtonpost.com/news/the-intersect/wp/2016/11/17/facebook-fake-news-writer-i-think-donald-trump-is-in-the-white-house-because-of-me/.

    Graham-Harrison, E., & Cadwalladr, C. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. Retrieved October 14, 2019, from https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.

    Madarang, C. R. S. (2019, April 2). Differently abled student achieves dream of graduating despite being mocked. Retrieved October 14, 2019, from http://www.interaksyon.com/trends-spotlights/2019/04/02/146679/concerned-filipinos-hit-back-at-people-mocking-differently-abled-students-graduation-photo/.

    Parkinson, H. (2016). “Click and Elect: How Fake News Helped Donald Trump Win a Real Election.” Guardian, November 14.
    RA 9442 – An Act Amending Republic Act No. 7277, Otherwise Known As The “Magna Carta For Disabled Persons, And For Other Purposes”. (n.d.). Retrieved October 14, 2019, from https://www.ncda.gov.ph/disability-laws/republic-acts/republic-act-9442/.

    Read, M. (2016). “Donald Trump Won because of Facebook.” New York Magazine, November 9.

    Wingfield, N., Isaac, M., & Benner, K. (2016, November 14). Google and Facebook Take Aim at Fake News Sites. The New York Times. Retrieved from https://www.mediapicking.com/medias/files_medias/nytimes—google-and-facebook-take-aim-at-fake-news-sites-0237488001479491012.pdf

    Wong, J. C. (2019, July 12). Facebook to be fined $5bn for Cambridge Analytica privacy violations – reports. Retrieved October 14, 2019, from https://www.theguardian.com/technology/2019/jul/12/facebook-fine-ftc-privacy-violations.

  22. Dear Mr. Evan Clark Williams,

    This letter wishes to extend to you my concern regarding Twitter’s enforcement of rules and standards, particularly the policy on non-consensual nudity. Although the 2019 version of the rules and standards, together with Twitter’s detection algorithm, has been active in taking down such posts, I believe these are not enough to detect more locally occurring cases, owing to the algorithm’s limited understanding of our local language and social media slang.
    
    Back in 2017, several cases of leaked nudes tied international personalities to scandals. These included a solo artist who is a former boy band member, two female singers, and an actress. One local celebrity was even involved. These results were taken from a single Twitter search for “leaked nudes”. Through 2016-2017, there was a clear decrease in these incidents owing to the newly implemented rules and policies that took down such posts. However, earlier this year, a local teen celebrity’s nude video circulated in the Twitter space. The celebrity encountered slut-shaming remarks from netizens despite being the victim of the leak. This practice is also known as image-based violence, or “revenge porn”, in which people’s nude images or videos are spread online without their consent (Vanian, 2017). These celebrities were lucky that they were able to address the issues, backed by Twitter’s report feature. The same, however, cannot be said for smaller names.
    

    Outside the showbiz industry, more local netizens experience this non-consensual sharing of nude pictures and even videos of sexual activity. In the “alter world”, the part of the Twitterverse where people post and share photos or videos of themselves and their sexual activities, alter accounts often tweet photos and videos of sexual encounters with people who did not consent to the sexual act, to the posting of such material, or to both. Sometimes these even involve minors, who cannot legally give consent to such activities.

    Revenge porn poses severe consequences for victims of the non-consensual sharing of their nude photos and videos. Studies show that victims of revenge porn suffer the same trauma that victims of sexual assault experience (Bates, 2016). Victims may develop PTSD, with symptoms ranging from panic attacks to night terrors and anxiety (Ehrenkranz, 2018). Additionally, US studies show that 51% of revenge porn survivors encounter suicidal thoughts, owing to a sense of sexual exploitation that goes beyond the online world.

    With the Twitter algorithm centered on identifying tweets based on their content and the language they are written in, I recommend either improving the AI used in detecting such posts so it can take down local posts written in Filipino, in local slang, and in other languages not yet covered by the algorithm; or, at the least, employing people who monitor local posts with sensitive material so they can identify which posts contain non-consensual nudity.

    Thank you for your consideration.

    References:

    Vanian, J. (2017) Twitter Wants to Crack Down on Revenge Porn. Fortune. Retrieved from https://fortune.com/2017/10/27/nudity-revenge-porn-twitter/

    Twitter (2019) Search Results for “leaked nudes”. Twitter. Retrieved from https://twitter.com/search?q=leaked%20nudes&src=typed_query&f=live&lf=on

    Twitter (2019) Non-consensual Nudity Policy. Twitter Help Center. Retrieved from https://help.twitter.com/en/rules-and-policies/intimate-media

    Ehrenkranz, M. (2018) We Need to Study the Effects of Revenge Porn on Mental Health. Gizmodo. Retrieved from https://gizmodo.com/we-need-to-study-the-effects-of-revenge-porn-on-mental-1823086576

    Bates, S. (2016) Study Reveals Revenge Porn Victims Suffer Similar Trauma As Sexual Assault Victims. Fight The New Drug. Retrieved from https://fightthenewdrug.org/revenge-porn-survivors-suffer-similar-trauma-assault-survivors/

  23. Dear Mr. Evan Clark Williams,

    I am writing to you to complain in the strongest terms about the enforcement of rules and standards of Twitter. Although there are policies available to avoid certain violations, I believe that the monitoring of these violations have not been enough. Numerous accounts of hate speech are still available for viewing up to this day despite Twitter’s hateful conduct policy.

    To further explain my point, consider a tweet by Donald Trump last April that included footage of Rep. Ilhan Omar, one of the first Muslim women to serve in Congress, intercut with footage of the 9/11 terrorist attacks. The tweet is still up even though its purpose quite clearly was to stereotype all Muslims as terrorists. According to Cameron (2019), while Trump’s tweet about Rep. Omar remains online, the president himself is exempt from punishment because, according to Twitter, anything he has to say falls under a “public interest” exemption.

    This has caused Rashad Robinson, president of Color of Change, America’s largest online racial justice organization, to criticize Twitter’s hateful conduct policy as “too simplistic for the complicated world we live in”, one that “fails to address the nuanced intersections of its users’ identities.” Despite Twitter’s best efforts to update its hateful conduct policy, it still tends to ignore coded language and imagery, such as Trump’s video of Rep. Omar.

    To address this issue, Twitter needs to review its hateful conduct policy, especially its coverage and extent. The problems with Twitter’s hateful conduct policy are long-standing, and seeking public comment, especially from experts on hate speech, is recommended.

    I am hoping for your consideration regarding this matter.

    References:

    Cameron, D. (2019, July 9). Civil Rights Groups Mostly Unimpressed by New Twitter Policy Against ‘Dehumanizing’ Language. Retrieved October 16, 2019, from https://gizmodo.com/civil-rights-groups-mostly-unimpressed-by-new-twitter-p-1836227745.

    Twitter. (2019). Hateful conduct policy. Retrieved October 16, 2019 from https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy

  24. Dear Mr. Sundar Pichai,
    Since the creation of YouTube, it has become much easier to express oneself creatively on a platform that is free for everyone to use, regardless of background. However, this has also allowed the proliferation of content, specifically sexual content, that is not appropriate for its young audiences. Policies against sexually gratifying explicit content are in place, as well as the Child Safety Policy; such content can be removed or age-restricted.
    However, these are not enough to safeguard the well-being of users, as some sexual content can be subversively included in videos, or in the timeline itself; in some instances, pirates have bypassed Google and YouTube and started hosting pornographic videos without YouTube’s knowledge (Griffin, 2017). This and other objectionable content is driven by revenue, which further encourages these people to put inappropriate content (abortion, pedophilia, etc.) on your platform. In an informative video by Rebel News, such content is shown being presented to kids using children’s characters, making it all the harder to track and detect.
    In an article by CNBC, medical experts have warned the public of the effects of such types of disturbing content on the developing brain. According to Volpitta, founder of The Center for Resilient Leadership, “Children who repeatedly experience stressful and/or fearful emotions may underdevelop parts of their brain’s prefrontal cortex and frontal lobe, the parts of the brain responsible for executive functions, like making conscious choices and planning ahead,” further implying the negative impact of such unregulated content.
    In light of this, I recommend a more thorough examination of every piece of content targeted at young audiences. Given the scale of the company, it is not hard to imagine it implementing stricter regulations and deploying more technology and manpower to make sure there is no hidden content inappropriate for children and youth. In addition, one route that can be taken is to demonetize these types of content, as well as to reach out to the proper authorities to investigate further and press charges against the perpetrators of such actions.
    – from University of the Philippines students

    References:

    Bila, J. (2018, February 13). YouTube’s dark side could be affecting your child’s mental health. Retrieved from CNBC: https://www.cnbc.com/2018/02/13/youtube-is-causing-stress-and-sexualization-in-young-children.html
    Google. (n.d.). Nudity and sexual content policies. Retrieved from Youtube Support: https://support.google.com/youtube/answer/2802002?hl=en
    Griffin, A. (2017, January 16). Porn videos secretly hidden on YouTube as pirates bypass Google’s sexual content controls. Retrieved from Independent: https://www.independent.co.uk/life-style/gadgets-and-tech/news/porn-videos-secret-hiddent-youtube-priates-hacking-google-nudity-controls-censorship-graphic-content-a7529821.html
    The Rebel News. (2017, June 26). Sex Content on YouTube… For Kids? Retrieved from Youtube: https://www.youtube.com/watch?v=U5cowWPmO5Q&feature=youtu.be&t=352

  25. Dear Mr. Mark Zuckerberg,

    I would like to raise some concerns regarding Facebook’s system of filtering cruel and insensitive content. Although the platform has already released a community standard disallowing these kinds of behaviors, the measures are not enough to detect and remove cruel and insensitive posts, especially in countries where English is a secondary or tertiary language, and in countries with several existing languages. Furthermore, in the latest update on the status of reinforcing Facebook’s community standards (Rosen, 2019), there was no data on the prevalence and removal of cruel and insensitive content on the platform.

    Facebook is a platform where free speech is celebrated, but in recent years it has also witnessed a culture of victim-blaming, particularly towards sexual assault victims. For instance, early this year in the Philippines, news broke about a girl who was raped by her friends after she got drunk (the link to the news report is enclosed in this letter). It has more than 500 comments, and if you scroll through the comment section, you will see people blaming the girl for drinking. Most of the comments are written in Filipino and, occasionally, in other languages such as Waray. To quote some: “kapag may alak, may balak” (if there is alcohol, there is a motive for sex), “ikaw rin mismo gumawa ng paraan na mapahamak sarili mo” (it is you who chose to endanger yourself), “baka ginusto mo rin” (maybe you unconsciously wanted it), “buti nga sa’yo” (serves you right).

    It has been proven by several studies that victim-blaming impacts the psychological well-being of victims of violence. In the case of sexual assault victims, blaming interferes with their intention to report the crime, questions the legitimacy of the crime, or, worse, makes the victims deny the crime or blame themselves for it (Young, 2016; Cravens et al., 2015). Victim-blaming forces victims into silence and shifts the responsibility from the perpetrators to the victims.

    In light of this, I would like to recommend that Facebook hire moderators in each country where it has a large number of users, so that they can account for the languages prevalent in their respective countries. A cultural lens in your community standards will surely help in fighting the prevalence of victim-blaming and other rape myths in society. Furthermore, Facebook’s AI system should also detect cruel and insensitive comments, because comments are rarely reported by users.

    I hope that you take my recommendations into consideration. Sexual assault victims have a higher tendency to be blamed than victims of other interpersonal crimes (Gravelin, Biernat, & Bucher, 2019; Bieneck & Krahé, 2011), and I am confident that Facebook will not become a community that enables victim-blaming. The same goes for victims of all other forms of violence and crime.

    Sincerely,

    Robelyn Bautista

    Link to mentioned news report: https://facebook.com/story.php?story_fbid=10156852288380168&id=27254475167

    References
    Bieneck, S., & Krahé, B. (2011). Blaming the victim and exonerating the perpetrator in cases of rape and robbery: is there a double standard? Journal of Interpersonal Violence, 26, 1785–1797. doi: 10.1177/0886260510372945

    Cravens, J. D., Whiting, J. B., & Aamar, R. (2015). Why I stayed/left: An analysis of voices of intimate partner violence on social media. Contemporary Family Therapy, 37, 372-385. doi:10.1007/s10591-015-9360-8

    Gravelin, C. R., Biernat, M., & Bucher, C. E. (2019). Blaming the victim of acquaintance rape: Individual, situational, and sociocultural factors. Frontiers in Psychology, 9. Retrieved from https://www.frontiersin.org/article/10.3389/fpsyg.2018.02422

    Rosen, G. (2019, May 23). An update on how we are doing at enforcing our community standards. Retrieved from https://newsroom.fb.com/news/2019/05/enforcing-our-community-standards-3/

    Young, C. F. (2016, May 18). The consequences of victim-blaming: Sexual assault and higher education. Retrieved from https://injury.research.chop.edu/blog/posts/consequences-victim-blaming-sexual-assault-and-higher-education#.XachckYzY2w

  27. Dear Mr. Mark Zuckerberg,

    Greetings. I am a student from the University of the Philippines Diliman and I am writing to you to express my concern about Facebook and how it is being used to propagate hate and incite violence.
    
    I am aware that Facebook has guidelines and community standards that condemn such incitement of violence and propagation of hate, especially where it may lead to offline or real-world harm (Facebook, 2019). I believe these guidelines and standards are not enough, because posts on Facebook have already claimed lives and violated rights that cannot be given back or remedied.
    
    This is evident in how Facebook was used to incite the violence that culminated in genocide against the Rohingya people of Myanmar. A New York Times article by Paul Mozur (2018) revealed that Myanmar military personnel used Facebook to spread hateful propaganda against the Rohingya, a minority group in Myanmar. Facebook served as a tool for this ethnic cleansing, “inciting murders, rapes, and the largest forced human migration in recent history” (Mozur, 2018).
    
    These systematic campaigns against the Rohingya spanned almost half a decade, and although Facebook took down the official accounts of Myanmar military leaders in August of last year, the propaganda campaigns remained on the platform for a considerable time because Facebook failed to detect the fake names and false accounts behind them.
    
    Facebook itself admitted and confirmed many details of this military-backed campaign that contributed to the violence against the Rohingya. Facebook also admitted that it did not act fast enough to stop the propaganda. By then it was too late: more than 700,000 Rohingya people had already been forced to flee the country.
    
    Indeed, Facebook, aside from banning these accounts, also banned four insurgent groups it classified as “dangerous organizations”. Though commendable as a response to the issue, the move is questionable, as the singling out of these four groups is “arbitrary at best and harmful at worst” (Samuel, 2019).
    
    Also commendable is Facebook’s decision to hire about 100 native Myanmar speakers to review content, which led to action on about 64,000 pieces of content and the removal of 18 accounts and 52 pages related to the Myanmar military’s anti-Rohingya propaganda (Ellis-Petersen, 2018).
    
    These actions, however, are not enough and came too late: lives have been lost, people have been forced to leave their homeland, and human rights violations have taken place. By the time the content was discovered and taken down, the damage had been done. If this happened to the Rohingya people, it is not far-fetched that Facebook may again be used to foment hate and violence against another vulnerable group.
    
    With this, I implore Facebook to be stricter in its policies. It is not enough that these posts and accounts be taken down; they should never have been posted in the first place. Though it would be difficult to review content as it is posted, it is important to monitor it closely, as a single post that stays up for even an hour can influence thousands of people.
    
    As noted, it is laudable that Facebook hires people native to the country, as they are the ones aware of its culture and issues. Facebook should continue this practice and hire even more people who are truly informed about and involved in the political, economic, cultural, and other aspects of every country.
    

    Facebook should be extra vigilant, especially in countries that are politically unstable or experiencing any kind of unrest. Behavior that appears similar or systematic, or that seems part of a larger singular narrative, should be investigated immediately.

    Facebook should also strengthen its policies that protect the rights of marginalized groups and individuals.
    
    I hope this letter adds to the discourse on what can be done about such systematic propaganda against protected groups of people, and emphasizes the role Facebook plays in shaping a country’s collective thought.
    

    Thank you, and I hope for your action to combat such incidents.

    Respectfully,
    Lia Munsod

    REFERENCES:

    Ellis-Petersen, H. (2018, November 6). Facebook admits failings over incitement to violence in Myanmar. Retrieved October 15, 2019, from https://www.theguardian.com/technology/2018/nov/06/facebook-admits-it-has-not-done-enough-to-quell-hate-in-myanmar.

    Facebook. (n.d.). Violence and Incitement. Retrieved October 15, 2019, from https://www.facebook.com/communitystandards/credible_violence.

    Mozur, P. (2018, October 15). A Genocide Incited on Facebook, With Posts From Myanmar’s Military. Retrieved October 15, 2019, from https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html.

    Samuel, S. (2019, February 7). Facebook is reckoning with its role in “a textbook example of ethnic cleansing.” Retrieved October 15, 2019, from https://www.vox.com/future-perfect/2019/2/7/18214351/facebook-myanmar-rohingya-muslims.

  28. Dear Mr. Mark Zuckerberg,

    I am writing this letter to express my concern over the existence of child pornography on Facebook. It is worth noting that Facebook has been successful in bringing communities together and creating discourse. However, this has also opened a Pandora’s box for unethical content. Even with Facebook’s community standards in place, I want to point out that it is alarming how little the social network is doing to protect the rights of children, given the continued presence of such obscene content. Facebook should not be a safe haven for predators to prey on the vulnerable.

    Many Facebook groups are still able to get away with uploading child pornography. Pedophiles are able to exchange videos because of the lack of safeguards on the Messenger platform.

    In the case of Keith Liwanag, a Pinoy bodybuilder, he was able to communicate with women from the Philippines and pay them to sexually abuse children and send him photographs (ABS-CBN North America News Bureau, 2019). US authorities seized his exchanges with the women who enabled the abuse, which amounted to fifty recorded video conferences. A video he shared online was also documented. For the crime, he was sentenced to 15 years in prison.

    The “report” link is not enough to protect children. To address this issue, Facebook must invest more in content moderators, both through heavier recruitment and through policies that care for their mental health, given the trauma that can come from reviewing the content flowing through the platform. It is important to recognize that their sentiments and reactions are valid; their psychological trauma must be processed (Rogan, 2019).

    Sincerely, Angelica Taruc

    ABS-CBN North America News Bureau. (2019, March 8). Pinoy bodybuilder sentenced to 15 years in US for child porn. Retrieved October 16, 2019, from https://news.abs-cbn.com/overseas/03/08/19/pinoy-bodybuilder-sentenced-to-15-years-in-us-for-child-porn.

    Rogan, A. (2019, October 5). Ex-Facebook moderator: Zuckerberg is playing down trauma of content jobs. Retrieved October 16, 2019, from https://www.businesspost.ie/news/ex-facebook-moderator-zuckerberg-playing-trauma-content-jobs-454254.

  29. Dear Mr. Jack Dorsey,

    This letter is about Twitter’s rules on child sexual exploitation. While I appreciate that you currently have policies carefully laid out to protect minors from sexual predators, something seems lacking in the implementation of those policies, as sexually exploitative content that victimizes children is still very much accessible on your platform.

    In an interview conducted by BBC Trending with an informant, it was revealed that sexual images of children are being openly swapped on Twitter. The informant said that once she had found one account, “you click on their retweets and that opens up more accounts and it creates this rabbit hole where you just keep finding more and more child porn.”

    According to UNICEF, more than 175,000 children go online for the first time every day – a new child every half second. It is therefore important that you look into this and develop solutions to the problem. I recommend a stricter implementation of your rules.

    Thank you for your consideration.

    Sources

    UNICEF. (n.d.). Retrieved October 16, 2019, from https://www.unicef.org/endviolence/endviolenceonline/

    BBC News. (2016, November 26). How innocent photos of children have been exploited on Twitter. Retrieved October 16, 2019, from https://www.google.com.ph/amp/s/www.bbc.com/news/amp/blogs-trending-38103791

If the comment posted does not appear here, that's because COMMENTS WITH SEVERAL HYPERLINKS ARE DETAINED BY AKISMET AT THE SPAM FOLDER.
