Since much of the conscious activity of this generation revolves around mobile phones, digital devices, and social media, whether as Facebook users, Twitter users and followers, Instagrammers, YouTubers, audiences, consumers, gamers, dreamers, fighters, or creators, and since there is no course in the entire University devoted solely to social media behavior, this section on Communication and Media Ethics will pilot a syllabus this semester that confronts, in large part, social media behavior, including:

How to be safe on social media
The Dark Web
Hate speech (such as speech that fosters prejudice based on race, creed, religion, etc., including red-tagging)
Trends in cybercrimes for 2019
Bullying and other threatening behavior
Data privacy breaches
Fake news
Social media standards, community standards, and breaches

All of these will be discussed in the first five weeks. The next four weeks will be devoted to the broadcast media (KBP Code) and to specialized reporting, including conflict reporting and disaster reporting.
Two weeks will be devoted to standards in public information and advertising.
Three weeks will be spent on ethical dilemmas in the film industry.
At least two weeks will be dedicated to the SPJ Code of Ethics and the Philippine journalists' code of ethics.
Two weeks will be allotted to research guidelines on plagiarism.

As part of the introduction, class members are advised to read and study the following topics and articles, and to accomplish the profile below. Deadline: August 15 at 7 p.m. (Class members who fail to meet the deadline will be considered not present in class.)

INTRODUCTION (class members are advised to read and study the following topics and articles for the introduction part of the discussion on social media behavior)
(video credits: as stated in the video)

THE DARK WEB

Dark Web connects PH to mass shootings in US, Filipino pols' vanities
by Tony S. Bergonia – @inquirerdotnet
INQUIRER.net / 07:39 PM August 06, 2019 (embedded below in green font)

Manila, Philippines – The dark corners of the Web are turning the Philippines into an international hub for internet trolling and for operators of sites that make their life-changing impacts, like the massacre of 22 people in Texas, felt as far away as the United States, according to various reports gathered and studied by INQUIRER.net.

From a pig farm in an undisclosed location in the Philippines to popular coffee shops in the country's urban business districts, the dark side of the Web is churning out material ranging from seemingly harmless but eyebrow-raising claims of popularity by Philippine politicians to lethal creeds of racism posted by the gunman in the El Paso, Texas mass killing on a supposedly free-speech online forum called 8chan, which is run by an American expat in the Philippines.

While the Philippines is becoming an international hub for internet trolling that caters mainly to political clients, according to a Washington Post investigative report, other reports painted a blacker, deadlier picture of some website operations originating from the country. These included a site called 8chan, on which a 21-year-old white supremacist posted what has been described as a creed against Latinos and "invasion" by immigrants before opening fire with an assault rifle at shoppers in a mall in El Paso, where eight out of every 10 residents are Latino, killing 22 people, including children, and wounding dozens of others.
The mass shooting was followed almost half a day later by another that claimed nine lives in Dayton, Ohio, but the El Paso carnage offered clues to how the Philippines is becoming like a war room for internet operators that provide services akin to call centers but work in hard-to-detect anonymity, which allows them to flout laws and shields them from accountability.

Haven in the Philippines

According to reports from Inquirer.net, Time.com, the NY Times and Buzzfeed, the site 8chan is operated by a retired American serviceman, identified as Jim Watkins, and his son, Ronald, from an undisclosed location in the Philippines, where Watkins chose to settle starting in 2007 after retiring from the US Army.

In February 2017, in an interview with Buzzfeed, Watkins gave out his location simply as a pig farm in the Philippines, from where he runs 8chan and another site, Goldwater, which featured materials promoting now US President Donald Trump and bashing his critics. Goldwater, according to the Buzzfeed report, also maintains a YouTube channel which greets visitors with a video clip featuring two "attractive" Filipino women, presumably Watkins' employees, who recite reports "in accented English" that heap praise on Trump. Watkins appears in the video under a fictitious name, the Buzzfeed report said.

The former casino investor, now US President, is being taken to task by anti-gun violence and anti-racism advocates in the US for fueling hatred against colored immigrants through his vitriolic rhetoric, which was believed to have inspired the El Paso gunman. Such vitriol, coming from white supremacists ignited by Trump, has found a home in the Philippines through the website 8chan.

Reports by INQUIRER.net, TIME.com and Buzzfeed identified the creator of 8chan as another American who has also found a retirement haven in the Philippines: born-again Christian Fredrick Brennan, who is now calling for the shutdown of the website following the El Paso bloodshed.
Brennan had passed on ownership of 8chan to Watkins in 2015.

Loving Trump

The site was kept alive by Watkins, who also owns the web hosting company N.T. Technologies, according to TIME.com, and made money through a now-defunct Japanese porn site. "It doesn't make money, but it's a lot of fun," Buzzfeed quoted Watkins as saying of 8chan in the February 2017 interview.

There was no mistaking who Watkins' websites, Goldwater and 8chan, were created for, if the Buzzfeed reports were to be read closely. Goldwater appeared as a news page that carried headlines like "Anti-Trump Liberals Throw Tantrums by Refusing to Pay Taxes," according to Buzzfeed. Its YouTube channel carried a video with the warning "The Shadow Government is Rumored To Be Conspiring Against President Trump."

It's not known how Watkins divided his time between maintaining the two websites and his pig farm in the Philippines, but he posed for a Buzzfeed photo carrying one of his piglets in his Philippine location. One of his videos showed him letting out vape smoke as he sat between two Filipino women who appeared to be busy with their laptops. The video was also presumably taken at the base of his operations in the Philippines.

The Buzzfeed report said 8chan also played a major role in the US elections that saw Trump become President. The report said the site became home to memes and conspiracy theories that grew out of a "fever swamp" of 8chan's message boards. One of these online forums was named Bureau of Memetic Warfare, where, Buzzfeed said, "White supremacists and internet trolls join in their disdain of social justice warriors and mainstream media."

Ron, Watkins' son, was quoted in the Buzzfeed report as saying popular topics at 8chan during the US presidential elections were Hillary Clinton, Trump's main election rival; Trump's rallies; and US Vice President Mike Pence. Some users, Ron said in the Buzzfeed report, were against Trump, though.
Freedom to hate

8chan, Ron said in the Buzzfeed report, was where people have the freedom to say what they want. "If you want to say nigger, of course, you should be able to say that," Ron was quoted by Buzzfeed as saying. He said he believed 8chan "helped get Trump elected," according to Buzzfeed, but there's no way to verify this. 8chan's audience, however, was huge, Ron told Buzzfeed. "You've got a million people a day looking at 8chan, on a good day. It's huge," Ron was quoted as saying. According to Watkins in the Buzzfeed report, a Trump ad ran on 8chan for "most of the election."

8chan is now being investigated for other hate crimes in the United States, according to CNN, although it's not clear if the investigation would carry over to the Philippines, where 8chan is being operated.

PNP takes notice

The Philippine National Police (PNP), however, has taken notice, and PNP Chief Oscar Albayalde ordered the PNP Anti-Cybercrime Group (ACG) to verify reports on 8chan. The New York Times had described 8chan as a "megaphone for gunmen" and a "go-to resource for violent extremists," saying at least three mass shootings this year (including the shooting at a synagogue in Poway, California, and the mosque killings in Christchurch, New Zealand) had been announced on the website's message board before the killings were carried out.

Albayalde said shutting down 8chan, however, would follow a process of investigation and would depend on "what we will see." The PNP, he said, has to file a formal complaint that would lead to the site's closure. "It depends on its connections, on what we see, especially if there are locals involved," said the Philippines' highest-ranking police official. He noted, though, that mass shootings in the United States were not likely to happen in the Philippines because of "very strict" laws on gun ownership, which require applicants to pass neuropsychiatric tests before being given permits to own or carry a gun.
Such gun regulations, like those in the Philippines, are being called for by Democrats and anti-gun violence groups in the United States but are being thwarted by the powerful lobby of the US National Rifle Association.

Trolls coming your way

The controversial website, however, is just one of dozens of internet operations in the Philippines that are turning the country into what could be the nucleus of a much broader, worldwide phenomenon now known as trolling. Other operations, seemingly benign but as impactful as providing an online home to racists, have taken the Philippine political world by the horns and appear to have grown into a global industry threatening to extend its influence thousands of kilometers from the Philippines, according to the Washington Post's (Wapo) investigative report.

Such operations, using the internet as both battleground and tool, could be described simply as "political manipulation," the Wapo report said. It feeds mainly on falsehoods and embellished facts, the report said. Although run mainly by Filipino internet brains, the troll operations were already showing signs of going global, especially at a time when the United States and other countries "move into another election cycle in the troll age," the Wapo report said. "This is what disinformation will look like in the U.S. in 2020," the report cited Camille Francois, chief innovation officer of the New York-based social network analysis firm Graphika, as saying. "The Philippines shows us trends that are headed this way," Francois said in the Wapo report.

The trolling operations may appear to be simple. Troll brains would hire internet habitues in the Philippines, where internet use is among the highest in the world, to create false accounts. According to the Wapo report, such an operation may require thousands of SIM cards, which would be used to open fake accounts if real phone numbers were required by social network sites like Facebook or Twitter.
Political missions

These troll operators, according to Wapo, "are dramatically altering the political landscape in the Philippines with almost complete impunity—shielded by politicians who are so deep into this practice that they will not legislate against it and using the cover of established PR firms that quietly offer these services."

One trolling mission followed by Wapo in its report involved a candidate for the Philippine Senate, for whom the mission was to "cook up fake social media accounts to make it appear as if the candidate had a vast and fervent base of supporters." "Another goal was to smear any critics," the Wapo report said. "Across the Philippines, it's a virtual free-for-all. Trolls for companies. Trolls for celebrities. Trolls for liberal opposition politicians and the government. Trolls trolling trolls," it said.

One troll mission observed by Wapo and cited in its report involved a senatorial candidate who hired 24-hour trolls to launch a barrage of messages supporting the candidate and bashing his critics on Twitter and Facebook. "Fans leaped to his defense, debated his critics and sang praises for his leadership style," said the Wapo report. "Except it is all an illusion, manufactured by hundreds of fake accounts all meticulously tracked on a spreadsheet," the report said. The candidate lost but came close. The debacle, however, did not prove disheartening for the troll operators.

Business expansion

"Several paid troll farm operations and one self-described influencer say they have been approached and contracted by international clients, including from Britain, to do political work," said the Wapo report.
"Others are planning to expand overseas, hoping to start regionally."

Wapo also reported on another side of the operations, called "positive trolling," with one operator telling the newspaper that positive trolling is being used to counter online attacks on Philippine President Rodrigo Duterte, whose own operators also relied heavily on social media during the 2016 presidential campaign. Duterte supporters, according to the Wapo report, had "turned online intimidation into an art." The troll operator interviewed by Wapo said he watched from the sidelines in 2016 "when Duterte and his allies harnessed the power of self-declared patriots online and turned them into an organized cyber mob—the Diehard Duterte Supporters, or DDS."

Some Philippine trolls, the Wapo report said, operate unnoticed in coffee shops like Starbucks, which offer wi-fi connection with coffee, and where online battles between rival trolls sometimes take place. So the next time you see someone intensely focused on his or her laptop, mobile phone or tablet at your favorite coffee shop, don't be surprised if he or she is a troll. With a report by Cathrine Gonzales

xxx xxx xxx

HATE SPEECH ONLINE
by David L. Hudson Jr., First Amendment Scholar, and Mahad Ghani, First Amendment Center Fellow
(embedded below in red font; can be found at: Hate Speech Online)
xxx

Some websites deny that the Holocaust occurred. Others promote the beating of gays and lesbians. Still others rail against Muslims and Islam in the United States, or are anti-Christian. The 2016 election illuminated the extent to which "fake news" had infiltrated society, resulting in incidents like the "Pizzagate" episode, in which a man armed with an assault rifle entered a family pizza restaurant because of false reporting he had read online. Many such sites target young people and seek to promote their hateful ideologies.

"From cyberbullying to terrorists' use of the Internet to recruit and incite, Internet hate speech is a serious problem," said Christopher Wolf, immediate past chair of the International Network Against Cyber-Hate, in an e-mail interview. "The most notorious hate crimes of late — such as the shooting at the Holocaust Museum (in Washington, D.C.) — were committed by individuals who used the Internet to spread hate and to receive reinforcement from like-minded haters, who made hatred seem normal and acceptable."

Some contend that hate speech infringes on the 14th Amendment's guarantee of equal protection under the law. Alexander Tsesis, for example, wrote in a 2009 article that "hate speech is a threatening form of communication that is contrary to democratic principles." 1

However, the First Amendment provides broad protection to offensive, repugnant and hateful expression. Political speech receives the greatest protection under the First Amendment, and discrimination against viewpoints runs counter to free-speech principles. Much hate speech qualifies as political, even if misguided. Regulations against hate speech are sometimes imposed because the government (at any level) disagrees with the views expressed. Such restrictions may not survive constitutional scrutiny in court. Furthermore, the U.S. Supreme Court in Reno v.
ACLU (1997) noted (albeit in a non-hate-speech context) that the Internet is entitled to the highest level of First Amendment protection, akin to the print medium. In other words, online hate speech receives as much protection as a hate-speech pamphlet distributed by the Ku Klux Klan. Given these factors (high protection for political speech, hostility to viewpoint discrimination, and great solicitude for online speech), much hate speech is protected. However, despite its text ("Congress shall make no law … abridging the freedom of speech"), the First Amendment does not safeguard all forms of speech.

UNPROTECTED CATEGORIES

Unless online hate speech crosses the line into incitement to imminent lawless action or true threats, the speech receives protection under the First Amendment.

INCITEMENT TO IMMINENT LAWLESS ACTION

In Brandenburg v. Ohio (1969), the Supreme Court said that "the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action." Most online hate speech will not cross into the unprotected category of incitement to imminent lawless action because it will not meet the imminence requirement. A message of hate on the Internet may lead to unlawful action at some indefinite time in the future, but that possibility is not enough to meet the highly speech-protective test in Brandenburg.

xxx

TRUE THREATS

Some online hate speech could fall into the unprotected category of true threats. The First Amendment does not protect an individual who posts online "I am going to kill you" about a specific individual. The Supreme Court explained the definition of true threats in Virginia v.
Black (2003), in which it upheld most of a Virginia cross-burning statute, this way: "'True threats' encompass those statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals. The speaker need not actually intend to carry out the threat. Rather, a prohibition on true threats protect(s) individuals from the fear of violence and from the disruption that fear engenders, in addition to protecting people from the possibility that the threatened violence will occur." The Court in Virginia v. Black reasoned that crosses burned with an intent to intimidate others could constitutionally be barred, as provided in the Virginia law.

xxx

Thus, online hate speech meant to communicate a "serious expression of an intent" to commit violence and intimidate others likely would not receive First Amendment protection. A few cases have applied the true-threat standard to online speech. In Planned Parenthood v. American Coalition of Life Activists (2002), the 9th U.S. Circuit Court of Appeals held that some vigorous anti-abortion speech (including a website called the Nuremberg Files that listed the names and addresses of abortion providers who should be tried for "crimes against humanity") could qualify as a true threat. The 9th Circuit emphasized that "the names of abortion providers who have been murdered because of their activities are lined through in black, while names of those who have been wounded are highlighted in grey." Similarly, the 5th U.S. Circuit Court of Appeals ruled in U.S. v. Morales (2001) that an 18-year-old high school student made true threats when he wrote in an Internet chat room that he planned to kill other students at his school.

xxx

SOCIAL MEDIA AND THE REASONABLE PERSON STANDARD

The Supreme Court turned its eyes toward social media to determine whether speech online could constitute a threat in Elonis v. United States (2015).
The case involved an individual who posted rap lyrics on his Facebook page in which he threatened to kill his estranged wife. He was charged with conveying threats across state lines. When the case arrived at the Supreme Court, it revolved around whether a post on social media crossed into the realm of the true-threat standard. The Court held that the reasonable person standard used at trial was not enough on its own: a conviction requires proof of the speaker's own mental state, not merely that a reasonable person would view the posts as threats, and on that basis it reversed the conviction. Since the Supreme Court decision, there have been cases filed in which children have used things like bomb emojis and have faced penalties. The Court's treatment of the reasonable person standard in Elonis will likely guide these cases moving forward.

CONCLUSION

If hateful Internet communications do not cross the line into incitement to imminent lawless action or a true threat, they receive First Amendment protection. The First Amendment distinguishes the United States from other countries. Alan Brownstein and Leslie Gielow Jacobs, in their book Global Issues in Freedom of Speech and Religion, write that the U.S. is a "free[-]speech outlier in the arena of hate speech." Many other countries criminalize online hate speech. With social media and the Internet increasingly resulting in real-world acts of violence, and serving as a recruiting tool for terrorists, it is likely the law will change to address the changing times.

Wolf, chair of the Anti-Defamation League's Internet Task Force, said much could be done to counter online hate speech besides criminalizing it. "There is a wide range of things to be done, consistent with the First Amendment, including shining the light on hate and exposing the lies underlying hate and teaching tolerance and diversity to young people and future generations," he said.
"Counter-speech is a potent weapon."

Where the law currently stands, hate speech is protected so long as it stays in the realm of speech alone. The great Supreme Court Justice Oliver Wendell Holmes wrote that "if there is any principle of the Constitution that more imperatively calls for attachment than any other, it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate." The Constitution ensures freedom of speech for all by protecting even the most vile speech of all.

NOTES
1 Alexander Tsesis, "Dignity and Speech: The Regulation of Hate Speech in a Democracy," 44 Wake Forest L. Rev. 497, 502 (2009).
2 Tiffany Kamasara, "Planting the Seeds of Hatred: Why Imminence Should No Longer Be Required to Impose Liability on Internet Communications," 29 Capital University L. Rev. 835, 837 (2002).
3 Jennifer L. Brenner, "True Threats — A More Appropriate Standard for Analyzing First Amendment Protection and Free Speech When Violence is Perpetrated over the Internet," 78 North Dakota L. Rev. 753, 783 (2002).
4 John P. Cronan, "The Next Challenge for the First Amendment: The Framework for an Internet Incitement Standard," 51 Catholic University L. Rev. 425 (2002).

xxx xxx xxx

Advisory for online safety from RSA (Rivest–Shamir–Adleman), rsa.com, at https://www.rsa.com/content/dam/premium/en/white-paper/2019-current-state-of-cybercrime.pdf (embedded below in olive green font):

xxx

(C)ybercriminals are increasingly using mobile to ply their trade, as evidenced by a 680 percent increase in fraud transactions from mobile apps between 2015 and 2018… This report explores this digital revolution that both sides are experiencing and examines its implications for fraud and other forms of cybercrime in 2019. We will look at the digital developments, market forces and regulatory pressures that are driving this shift in how fraudsters and others commit their crimes, as well as how anti-fraud forces fight them.
Based on insights gleaned from RSA research and other sources, we will focus on (the following) trends:

Trend #1: CYBERCRIME, GROWING PREFERENCE FOR MOBILE

Fraud in the mobile channel has grown significantly over the last several years, with 70 percent of fraud transactions originating in the mobile channel in 2018. In particular, fraud from mobile apps has increased 680 percent since 2015. In another indication of the growing popularity of mobile as a channel for cybercrime, the use of rogue mobile applications to defraud consumers is on the rise. RSA identified an average of 82 rogue mobile applications per day last year across the most popular app stores.

We expect the popularity of the mobile channel for fraud to continue through 2019, especially as cybercriminals keep finding ways to introduce tactics and technologies such as phishing and malware to the mobile channel. For example:

• Smishing uses SMS texts rather than email to deliver phishing messages aimed at getting victims' account credentials, credit card numbers, etc.
• Mobile 2FA phishing is a variant of smishing in which the account takeover attempt is specifically designed to bypass two-factor authentication.
• Mobile malware works like traditional malware to attack and disable a user's device, but specifically targets mobile devices, with developers constantly enhancing the malware to keep up with new versions and security patches of mobile operating systems.

The RSA Anti-Fraud Command Center expects these and other forms of mobile-based cybercrime to evolve and grow even more prevalent as organizations continue to leverage the mobile channel to deliver new digital services to customers.

Watch for: Cross-Channel Vulnerabilities

While fraud growth in the mobile channel continues to trend upward, it is by no means the only digital channel that fraudsters are exploiting.
As organizations continue to introduce innovative products and services online, in the cloud and across other digital channels, cybercriminals can be expected to seize on these developments to launch more attacks. In this scenario, we see that the very advances that fuel innovation and growth of digital channels also fuel cross-channel fraud. This is one of the ways in which digital transformation creates both digital opportunity and digital risk.

Consider the move to an open API economy, in which organizations can more easily share data … in the interest of customer convenience. This results in innovations such as consumers being able to share account information with apps and platforms of their choice. For example, a consumer can choose to securely share financial data with an app that provides financial planning. But it also creates a vulnerability across channels that cybercriminals will be eager to exploit.

Or think about how an increase in cybercrime can accompany the introduction of a new digital service. For example, the RSA Anti-Fraud Command Center saw phishing attacks increase 178 percent after leading banks in Spain launched instant transfer services. Cybercriminals are always alert to these types of developments and quick to seize on them for their own nefarious purposes.

Trend #2: USING LEGITIMATE PLATFORMS FOR ILLICIT ACTIVITY

Social Media: The New Public Square for Fraud

In the 2018 Current State of Cybercrime report, RSA reported on a fast-growing trend of cybercriminals relying on Facebook, Instagram, WhatsApp and other legitimate social media and messaging platforms to communicate with each other and sell stolen identities, credit card numbers and other ill-gotten gains. Our prediction that this trend would expand and continue has been borne out. By the end of last year, social media fraud attacks had increased 43 percent, as cybercriminals continued to find new ways to exploit social media platforms for gain.
One such development involves the Telegram bot feature, which cybercriminals are using to facilitate and automate their activities. Some bots provide automated tools for common actions to enhance communications, whereas others provide actual fraud services via online stores. RSA Anti-Fraud Command Center findings suggest trading in stolen identities will gain even greater momentum, with more stores likely opening on legitimate platforms to sell this type of data. Given the ease of use, absence of fees and other benefits of these platforms, continuation of this trend in 2019 should come as no surprise.

Using Mobile to Stay Low-Profile

RSA is seeing cybercriminals use mobile not just as a vehicle for launching phishing, malware and other attacks but also as a platform for resources that make it easier for them to carry out criminal activity and get away with it. In addition to using legitimate mobile apps for nefarious purposes, they are also developing their own apps to increase their anonymity, avoid detection and otherwise keep anti-fraud forces from tracking them down and exposing what they're doing. We can reasonably anticipate that this activity will continue to grow as cybercriminals become increasingly emboldened by their successes.

The Advantages of Blockchain for Cybercriminals

RSA reported last year on the use of a blockchain-based domain name system (DNS) to host sites such as stores that sell credit card information or other stolen data. Unlike traditional DNS addresses, which are subject to oversight by governing organizations like ICANN, blockchain-based DNS addresses have no oversight. That makes it harder for law enforcement to interfere with their operations, including taking down sites, and it makes the popularity of blockchain among cybercriminals likely to grow. This is one reason RSA anti-fraud experts are predicting that more fraud websites will be using blockchain domains in 2019.
Watch for: Exploiting On-Demand Services Platforms

What's the next frontier for cybercriminals looking for legitimate online platforms they can exploit? CNBC recently reported on the use of on-demand services platforms such as Uber and Airbnb to launder money made from credit card fraud: "Money laundering is an essential element in the proliferation of cybercrime, as much of the funds come in the form of cryptocurrencies with a chain traceable to crime." Using on-demand platforms to hide ill-gotten gains is one thing; using them to actually commit fraud is another. But it happens: CNET has reported on Uber drivers being victimized by fraudsters who impersonate the company's driver support team to cancel a ride, get the driver's Uber account credentials and then use them to steal the wages in the account before they are transferred to the driver's bank.

xxx xxx xxx

ORIENTATION: PLEASE SUBMIT YOUR AVATAR AND PUBLIC PROFILE ELECTRONICALLY

Use nicknames only, or pseudonyms, or "aliases"; do not use your full names. (Deadline: August 15, 2019 at 7 p.m. Class members who fail to meet the deadline will be considered not present in class.)
Happy new term, everyone! We will discuss the "Intro to the Course" next week, as embedded above. BEFORE THAT, please accomplish the following requirements; otherwise, you will not be allowed a seat in class next meeting until they are accomplished:

Please submit your avatar and public profile electronically; failing which, you will be excused so you can submit the requirement. PLEASE USE YOUR NICKNAMES ONLY, OR PSEUDONYMS, OR PET NAMES, OR "ALIASES" (hopefully, they are not aliases from a criminal record). This is a public site, and you are advised not to use your full names.

FOR THE FIRST DAY OF CLASSES: In order to have an organized flow of class discussion, everyone is required to attend the orientation on the first day of classes, even those who have not completed their payment. Experience shows that those who fail to attend the orientation fail to be aware of the requirements and class policies, fail to get their topics for reporting (for 60 points), and end up DISRUPTING THE CLASS with their noisy cellphones, noisy inquiries, and abrupt behavior in trying to get out of the classroom to comply with the requirements. Students who show an inability to comprehend these instructions will be asked to drop the class before wreaking more havoc.

Students will always be held responsible for whatever they miss as a consequence of being late or absent, and are requested not to harangue the handling faculty for special treatment by way of a "personalized briefing," an update, or topics. Students who persist in refusing to comprehend these instructions will be asked to drop the class.

The class record and class scorecards of this class are electronic (with one print copy as final backup). My avatar and public profile are in this site, in the "About" widget. The widgets appear at the foot of the site; the "About" widget is the eleventh, so scroll down.
ADMINISTRATIVE MATTERS:

As stated, the class records are electronic and will be based on scores arising from compliance with the requirements, centralized electronically in the department file.

Please submit your avatar and public profile electronically by embedding them or linking to them in the comments section of this post (you may post them early, before classes begin). You may use the computers in the department, the computers in the classrooms, the free public computers in the corridors, lobby, and library, or your own devices (the college has free public wifi).

The avatar is your digital public photo. For this class, do not submit an image of a cartoon character or a computer-generated image unless you want to be considered a winged creature in class. Do not submit a microscopic, dot-sized photo unless you want a dot score for all the requirements. Please make your avatar at least the usual 1" x 1". Thank you.

The public profile is the public description of yourself, the profile you use on your public sites. Use your nicknames only; do not submit your full names on this site. In your public profile, please type as a heading the designation of the class you are enrolled in (J101, Ethics, Media Law, or Grad Sch); this will make the department's work easier. Then please include the following information in your description:

1. Your course;
2. Your favorite book or novel of all time (and state why);
3. Your favorite film of all time (and state why);
4. Your favorite media practitioner of all time, in any medium: newspaper, broadcast, multimedia, film, social media, etc. (and state why);
5. Your favorite song/music/band/songwriter of all time (and state why);
6. Your favorite meal of all time (and state why);
7. Hobbies, if any (optional).

Those who do not have these will be asked to show or perform an original composition of their own in class as a description of themselves.
There are several ways of producing your public profile:

1. Through your own public site (free sites such as FB, Twitter, Tumblr, WordPress, Blogspot, etc.);
2. Through Gravatar (a free app/site);
3. Through about.com (another free site/app).

(Nothing on the internet is absolutely free: advertisers buy your info, so use your nicknames only, not your full names.)

If you're using Gravatar or about.com, the app automatically shows your avatar anywhere you post on the net, including in the comments section of this site, so you won't have to embed your photo separately.

There are two ways of submitting them in the comments section of this post:

1. By embedding them as in-line text; or
2. By linking to the URL of your own site (pasting the URL in the comments section).

Simply click the comments box at the end of this post, then type: you may embed your public profile and avatar, or paste the link to the site where they appear.

If you are a recluse, or have zero presence on the internet, you may get the class email address from the department and email the requirement. You will need to FOLLOW UP with the department to submit it to the handling faculty. The disadvantage of this procedure, as experience shows, is that it takes more administrative steps; any delay will be counted against the student for failure to follow up efficiently.

A one-page directory will be routed manually (in print) in class, where each student will be asked to write their email address, the name of a person to contact in case of emergency, and that person's contact info. This document is confidential, and no one is allowed to borrow or photocopy it.

Those who fail to submit an avatar and public profile will not be allotted an electronic classcard and will not appear in the electronic class record. Those who are not allotted a classcard and do not appear in the class record will be considered fictitious characters.
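For the curious, Gravatar's "automatic" matching works by hashing the email address you comment with and fetching the image from a URL derived from that hash, which is why the same avatar follows you across sites. A minimal sketch of how that URL is built (the email address here is hypothetical, used only for illustration):

```python
import hashlib

def gravatar_url(email: str, size: int = 80) -> str:
    """Build the Gravatar image URL for an email address.

    Gravatar hashes the trimmed, lowercased address (historically MD5)
    and serves the avatar at gravatar.com/avatar/<hash>.
    """
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}?s={size}"

# Hypothetical address for illustration only:
print(gravatar_url("juan.delacruz@example.com"))
```

Because the hash is computed from the normalized address, commenting with the same email anywhere Gravatar is supported (including WordPress comment sections like this one) pulls up the same avatar.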
Their grade will be posted by Wave, only after they defeat Malekith.