Social media: Now, even babies tweet

Many parents feel it’s essential to snap up Twitter handles and Gmail accounts for their kids before someone grabs those names.

“Harper Estelle Wolfeld-Gosk has 6,282 Twitter followers,” said Joe Coscarelli in NYMag.com. “She’s 2 weeks old.” The infant daughter of Today show correspondent Jenna Wolfe is just one of thousands of kids who have Twitter accounts that are written in their voices but are “set up, maintained, and authored by parents.” Here’s a sample of little Harper’s tweets: “Pooped AND pee’d on Dr’s changing table. Everyone laughed.”

Why bother with such twaddle? Blame both "everyday parental pride" and "tech-savvy paranoia." Many parents feel it's essential to snap up Twitter handles and Gmail accounts for their kids before someone grabs those names. Once those accounts are established, parents can't resist the temptation to put wisecracks in their kids' mouths. Some critics are calling this "oversharenting" — sharing too much information about kids online, said Eliana Dockterman in Time.com. One study found that 94 percent of parents post pictures of their kids on the Internet, with newborns uploaded to Facebook an average of 57.9 minutes after their birth.

You won’t find my daughter there, said Amy Webb in Slate.com. My husband and I have decided we will keep all photos of and references to her off the Internet until she’s mature enough to decide what to post. Exposing your child on social media poses huge issues for his or her “future self.” Do you really want photos of your 5-year-old in a bathing suit circulating permanently on the Internet? Do you want Google and Facebook to start compiling data about your kids before they can even crawl, to be shared with advertisers or intrusive government agencies or unknown searchers? “It’s inevitable that our daughter will become a public figure, because we’re all public figures in this new digital age.” But it should be her, not us, who decides what’s in that public identity.

So, parents, please spare us, said Mary Elizabeth Williams in Salon.com. All these babies tweeting and posting supposedly amusing observations on Facebook really is a bit much. “It’s like we all woke up one day in a mass version of Look Who’s Talking.” Children are not meant to be a “witty accessory” to your own online life. Besides, said Caity Weaver in Gawker.com, making sure your kid has the right handle on a Facebook and Instagram account 20 years from now is laughably shortsighted. It’s likely to be as useful as 1990s parents stockpiling “CompuServe screen names and laser disc players.”

Subscribe to the NetLingo Blog via Email or RSS here!



It's Time for Emojis to be More Diverse

"If these emoji are going to be the texting and Twitter standard, we think it'd be cool if they better reflected the diversity of the people using them," says Chris Gayomali. There are nine cat-face emojis, but not one black person.

Emojis have now fully embedded themselves into our digital vocabulary, showing up in everything from forgettable Katy Perry videos to comedians tapping rap lyrics into their iPhones. The sentiment behind emojis is nothing new, of course. It's why we started pairing colons with closed parentheses and cocking our heads to the side in the first place.

Now, should you find yourself in a situation in which words do not suffice, the iOS keyboard offers hundreds of emoji options for you to pick from. There are several pixelated yellow faces representing the full spectrum of boredom, for instance. There are at least 10 variations for hearts. There are emojis of gay couples holding hands, a smiling turd, demon masks, and a beaming cherub. There are white faces — both young and old — as well as tokenistic caricatures of what appear to be an Asian boy, an Indian man, and a family of Latinos.

What there aren't, however, are any emojis for black people. Not a single one.

It's an egregious omission, and one that's drawing the ire of a petition circulating on DoSomething.org, as Fast Company initially reported. The petition is calling for Apple to update its iOS keyboard to more accurately reflect the multitude of people who use it. It states:

Of the more than 800 emojis, the only two resembling people of color are a guy who looks vaguely Asian and another in a turban. There's a white boy, girl, man, woman, elderly man, elderly woman, blonde boy, blonde girl and, we're pretty sure, Princess Peach. But when it comes to faces outside of yellow smileys, there's a staggering lack of minority representation.

The conspicuous absence of black faces on the emoji keyboard is both "deeply troubling and probably racist," says Andy Holdeman at PolicyMic. The "easy answer" is that emojis were developed in Japan, where there aren't very many black people. But that's a cop out, argues Holdeman, considering there are also two different icons for camels. Yep. Camels.

Emoji were originally developed by Shigetaka Kurita, who engineered the expressive reaction faces in the late 1990s, around the time Windows 95 first began taking off in Japan. In 2010, emoji were added to the Unicode Standard, which brought them to other countries, including the United States.

Calls for a more diverse emoji palette have been building in volume for a few months now. Even Miley Cyrus — whose recent indiscretions appropriating ratchet culture haven't exactly endeared her to the black community — rallied behind the cause back in December.

Support for better icon representation has been building steadily. Back in February, during Black History Month, users took to Twitter, Instagram, and other platforms to call for more emoji diversity.

A lack of representation in something as inconsequential as dumb faces we text to each other is a firm reminder that racism isn't always explicit; more often, racism rears its head by marginalizing cultural influence in small, stubbornly ugly ways. "If these Emoji are going to be the texting and Twitter standard," write the petition's authors, "we think it'd be cool if they better reflected the diversity of the people using them." You can sign it over at DoSomething.org.





No nudity after all: Google bans porn from Glass

So long, "T--s and Glass," says Chris Gayomali. Google is keeping it clean.

Google is showing that it's willing to be uncharacteristically draconian in order to endear Glass to the general public. And now it's borrowing a page right out of Apple's porn-free playbook.

After adult app developer MiKandi debuted its "T--s & Glass" app — which allows the Glasserati to record, share, and rate pornography hands-free — Google snuck in and updated its developer policy to bar sexy-time apps from the headset completely:

We don't allow Glassware content that contains nudity, graphic sex acts, or sexually explicit material. Google has a zero-tolerance policy against child pornography. If we become aware of content with child pornography, we will report it to the appropriate authorities and delete the Google accounts of those involved with the distribution.

Although the Google Play store says it prohibits pornography, the Android marketplace is still flooded with apps with titles like "Big Boobs nude - Videos" and "Tear sexy girl's clothes."

As for MiKandi, it's back to the drawing board. The company promises to find a workaround so the truly dedicated can still ogle naked people inside a tiny cube of clear plastic. "When we first picked up our device, we were very careful to comb through all of Google's terms, policies, and developers' agreement to make sure we were playing within their rules," Jennifer McEwen, co-founder of MiKandi, told ABC News. "That was important to us to play in Google's boundaries."


The Biometrics Boom: Technology can identify you by unique traits in your eyes, your voice, and your gait. Is there cause for alarm?

What is biometrics? It is the science of identifying individuals by their unique biological characteristics. The best known and earliest example is fingerprints, used by ancient Babylonians as a signature and by police since the turn of the 20th century to identify criminals.

But in the last decade there has been a boom in more advanced biometric technology, allowing people to be identified, and sometimes remotely tracked, by their voices, the irises of their eyes, the geometry of their faces, and the way they walk.

The FBI is consolidating existing fingerprint records, mug shots, and other biometric data on more than 100 million Americans into a single $1.2 billion database. When it is completed, in 2014, police across the country will theoretically be able to instantly check a suspect against that vast and growing array of data.

Law-enforcement officials are enthusiastic about this growing power, while civil libertarians are aghast. "A society in which everyone's actions are tracked is not, in principle, free," said William Abernathy and Lee Tien of the Electronic Frontier Foundation. "It may be a livable society, but would not be our society."

How did the boom come about? The age of terrorism has created enormous interest in — and lowered resistance to — identifying and tracking individuals in a very precise way. "Biometrics represent what terrorists fear most: an increased likelihood of getting caught," said Homeland Security spokesman Russ Knocke.

Since 2002, the government has fingerprinted all foreign visitors to the U.S. at airports and borders, collecting approximately 300,000 prints per day. In Afghanistan and Iraq, U.S. forces have gathered iris data from 5.5 million people, to identify suspected insurgents and prevent infiltration of military bases. Fueled by the growth of iris scans in particular, the global biometrics industry in 2013 has revenues of $10 billion — and is expected to double that in five years.

How do iris scans work? Every person has unique patterns within the colored part of his or her eye. A device scans your iris and compares it with photos of irises on record, identifying people with accuracy rates of 90 to 99 percent, depending on the conditions and system used. Iris scanners are now widely used on military bases, in federal agencies, and at border crossings and airports.
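The matching step described above, comparing a scanned iris against codes on record, usually boils down to counting how many bits differ between two binary "iris codes." Here is a minimal toy sketch of that idea in Python; the eight-bit codes and the 0.32 decision threshold are illustrative assumptions, not any vendor's actual parameters.

```python
# Toy iris matching via fractional Hamming distance. Real systems
# encode the iris texture as a binary "iris code" thousands of bits
# long; two captures of the same eye disagree in only a small
# fraction of bits, while different eyes disagree in about half.

def hamming_distance(code_a, code_b):
    """Fraction of bits that differ between two equal-length codes."""
    assert len(code_a) == len(code_b)
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

def same_eye(code_a, code_b, threshold=0.32):
    # Illustrative threshold: genuine matches cluster well below it,
    # while impostor comparisons cluster near 0.5.
    return hamming_distance(code_a, code_b) < threshold

enrolled    = [1, 0, 1, 1, 0, 0, 1, 0]
probe_same  = [1, 0, 1, 1, 0, 1, 1, 0]  # one bit of capture noise
probe_other = [0, 1, 0, 0, 1, 1, 0, 1]  # a different eye entirely

print(same_eye(enrolled, probe_same))   # True  (distance 0.125)
print(same_eye(enrolled, probe_other))  # False (distance 1.0)
```

The 90-to-99-percent accuracy range cited above corresponds, in part, to where that threshold is set: tighten it and the system rejects more impostors, but also more genuine users.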

An improved version of the iris scanner can remotely assess up to 50 people per minute, making it possible to scan crowds for known criminals or terrorists whose iris patterns are on file. Facial recognition technology, which identifies people through such geometric relationships as the distance between their eyes, has also come a long way. The technology is still only about 92 percent accurate, but "the error rate halves every two years," said facial recognition expert Jonathon Phillips.

What other biometrics are there? The U.S. military is already using radar that can detect the unique rhythm of a person's heartbeat from a distance, and even through walls. That technology is being developed for use in urban battlefields, but may one day become a law-enforcement tool.

A person's gait, too, is completely individual, and the technology to recognize it has advanced to the point where a person can be identified by hacking into the sensor that tracks the movement of the cellphone in his or her pocket. "Because it does not require any special devices, the gait biometrics of a subject can even be captured without him or her knowing," said Carnegie Mellon University biometrician Marios Savvides.
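To make the gait idea concrete, here is a deliberately simplified sketch: it reduces each accelerometer recording to a crude feature vector (mean, variance, and zero-crossing rate as a stand-in for cadence) and picks the nearest enrolled subject. The feature set, the synthetic sine-wave "walks," and the subject names are all invented for illustration; real gait biometrics use far richer step-cycle models.

```python
import math

def features(signal):
    """Crude gait signature: mean, variance, zero-crossing rate."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    centered = [x - mean for x in signal]
    zcr = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0) / (n - 1)
    return (mean, var, zcr)

def identify(probe, enrolled):
    """Return the enrolled name whose recording is nearest the probe."""
    f = features(probe)
    return min(enrolled, key=lambda name: math.dist(f, features(enrolled[name])))

def walk(step_hz, amplitude, n=200, sample_hz=100):
    """Synthetic accelerometer magnitude for a walker (pure sine wave)."""
    return [amplitude * math.sin(2 * math.pi * step_hz * i / sample_hz)
            for i in range(n)]

enrolled = {"alice": walk(1.0, 1.0), "bob": walk(2.0, 1.5)}
probe = walk(1.05, 1.1)  # a slightly-off re-capture of alice's stride

print(identify(probe, enrolled))  # alice
```

Because the features come from an ordinary motion sensor, this is exactly the kind of computation that could run against a hijacked phone accelerometer, with no special hardware and no cooperation from the subject.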

What are the privacy implications? Civil liberties groups warn that if these technologies are not restrained by law, they could be used in truly Orwellian ways. No laws currently limit data collection from biometric technology or the sharing of that data among federal agencies.

Law-enforcement officials can use driver's license photos to identify or hunt for suspects, for example; the government or private companies could collect a person's biometric data without his consent and use it to track his movements. "That has enormous implications, not just for security but also for American society," said Chris Calabrese of the American Civil Liberties Union.

Is there any turning back? Probably not, especially now that private companies are embracing biometrics.

Already, TD Bank and Barclays Bank are using voice recognition technology to verify account holders. In the not-too-distant future, we'll be able to start our cars with our fingerprints, use facial recognition or iris scans instead of passwords on smartphones and other electronic devices, and have doctors check our medical records by scanning our faces.

These uses of biometrics will provide convenience and efficiency, but at a steep price in privacy. Iris technology that reads our eye movements, for example, will be able to determine what we look at in stores — then use that data to create highly personalized advertising aimed at what we've displayed interest in. "For companies and governments," said the ACLU's Jay Stanley, "the incentives associated with biometrics all point the other way from privacy."

Here in the U.S., proposals to put biometric data on Social Security cards have faltered because of concern among civil libertarians and conservatives over government overreach. But in much of the developing world, the concept of personal privacy carries less legal and cultural weight, and there a biometric revolution is taking place, with some 160 massive data-gathering projects underway.

Until the 21st century, more than a third of people in developing countries were not registered in any way at birth, making it hard for them to open bank accounts, get government benefits, or vote. Biometric IDs could change that.

India is taking the fingerprints and iris scans of all 1.2 billion of its citizens. Nandan Nilekani, the founder of outsourcing firm Infosys and the project's leader, says being identified will allow India's largely anonymous masses to claim services to which they're entitled under the law, rather than being forced to bribe bureaucrats. "Unique identification is a means to empowerment," he said.


Your Outraged Internet Comments are only Making YOU Angrier

Don't like this blog? Probably best to keep it to yourself, according to Keith Wagstaff. Someone is always wrong on the Internet. Don't let it get to you.

Facebook, blogs, Reddit, the comments section of a website — no corner of the Internet is free from online rants. But while venting online might feel cathartic, it could actually make you angrier in the long run, according to a new study by researchers at the University of Wisconsin-Green Bay.

As any online journalist knows, there are certain people who seem to revel in anonymously venting their anger. But what beleaguered writers may not be aware of is that there are two kinds of venters, according to the study: those who feel relaxed and calm after reading and writing online rants, and those who become sad and upset.

The study did not determine why certain people feel better after indulging in outrage, but it did find that those people eventually ended up angrier.

Not only that, but the people who felt compelled to share their rage through a series of tubes claimed that "they experienced frequent anger consequences, averaging almost one physical fight per month and more than two verbal fights per month."

So yes, your suspicions were correct: that person insulting you every day on your blog probably does have an anger management problem.

The study prompts the question: Is there any benefit to writing seething rants online?

Not really. This jibes with past studies on Internet "discourse."

"At the end of it you can't possibly feel like anybody heard you," Art Markman, a professor of psychology at the University of Texas at Austin, told Scientific American last year. "Having a strong emotional experience that doesn't resolve itself in any healthy way can't be a good thing."

In the end, seeking out a flesh-and-blood human being to hash out a political argument with will probably make you feel better than writing in all caps on the Internet.


'Deflower,' 'pornography,' and 'marijuana': The taboo words your iPhone won't spell

In case you weren't aware, Apple is a family company. Chris Gayomali informs us that Apple is keeping it clean.

The iPhone's autocorrect feature has certainly given the world its fair share of chuckles and book deals of questionable merit. But in less humorous news, it turns out that the iPhone refuses to preemptively fix a handful of so-called "sensitive" words, including "abortion," "rape," "murder," and more. A new experiment by The Daily Beast's NewsBeast Labs used a computer program to go through some 14,000 words that iOS 6, at factory settings, won't change if you make a slight spelling mistake:

In fact, previous iOS software, before spell check was introduced in April 2010, autocorrected many of the words the latest software won't. "Abortion," "rape," "drunken," "arouse," "murder," "virginity," and others were accurately autocompleted under iOS 3.1.3.

Currently all new iOS devices ship with iOS 6, which includes spell check. Anyone who has upgraded their iOS since fall 2012 will have the latest iOS 6 software.
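The Daily Beast's word-by-word test can be imitated in miniature. The sketch below is purely illustrative: difflib stands in for the iOS corrector (an assumption, since the real experiment ran against the device itself), and the word lists are tiny made-up samples. It misspells each word by one letter and checks whether the "corrector" restores it.

```python
import difflib

DICTIONARY = ["abortion", "rape", "murder", "arouse", "kitten", "puppy"]
SENSITIVE = {"abortion", "rape", "murder", "arouse"}  # words the corrector skips

def autocorrect(word):
    # Stand-in corrector: like iOS 6 reportedly does, it simply
    # refuses to suggest anything on the sensitive list.
    candidates = [w for w in DICTIONARY if w not in SENSITIVE]
    matches = difflib.get_close_matches(word, candidates, n=1)
    return matches[0] if matches else word

def misspell(word):
    # Flip the last letter to simulate a slight typo.
    return word[:-1] + ("x" if word[-1] != "x" else "y")

uncorrected = [w for w in DICTIONARY if autocorrect(misspell(w)) != w]
print(uncorrected)  # the sensitive words come back unfixed
```

Run against a real device dictionary instead of this toy blacklist, the same loop is what surfaces a list like the Daily Beast's 14,000 words.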

It's a bit strange, but it isn't entirely unexpected. Apple, which naturally refused to comment on the matter, is no stranger to pearl-clutching, as evidenced by its adamant insistence that the App Store remain PG-13.

Yet Apple's inability to comprehend that adults sometimes use adult language is oddly out of touch with reality. "My iPhone is not a dimwit. It seems to grasp and memorize names and phrases I use repeatedly. These may not have any significance to anyone beyond those who know me intimately," wrote CNET's Chris Matyszczyk in a 2012 column. "Yet somehow, it doesn't know s---."

That said, if you use a strange word not in Apple's standard dictionary enough (iOS 6 and up), it should save your dirty "slang, inside jokes, and abbreviations" in iCloud across your devices, according to Gizmodo. "S---head," for instance.

Head over to The Daily Beast for the full list of words your autocorrect doesn't recognize by default, which includes an eyebrow-raising array of Shakespearean gems and sailor-speak, such as "cuckold," "deflower," "marijuana," "pornography," and "prostitute."


Why it's so difficult to ban revenge porn

Almost everyone hates it. But state legislatures are having a tough time fighting it. "Is Anyone Up" may be gone, but there are plenty of other revenge porn websites lingering in the dark recesses of the Internet.

Before it was shut down in 2012, the website Is Anyone Up was the leading publisher of revenge porn, defined as nude cell-phone photos (or sexts) submitted by scorned exes, embittered friends, or malicious hackers, and posted alongside the subject's name, location, and social media information.

The resulting outrage directed at the site and its founder, Hunter Moore (whom Rolling Stone called "The Most Hated Man on the Internet"), made it look like bans on revenge porn would be an easy sell to lawmakers.

So far, it hasn't turned out that way. Only New Jersey has a law on the books specifically targeting revenge porn.

In 2013, California moved to punish anyone who posts nude or partially nude images of subjects who had a "reasonable expectation of privacy," including when the photographer originally had the subject's consent. If passed (as it later was), the bill would make posting revenge porn a misdemeanor punishable by up to a year in prison and a $2,000 fine.

Considering no legislator wants to be considered "pro-revenge porn," it should sail through the legislature. However, that is what lawmakers in Florida and Missouri thought before similar legislation stalled last year.

So what's the problem?

The issue of who is responsible for the photos is a big stumbling block, writes Patt Morrison at the Los Angeles Times:

As with an actual paper-and-ink letter, does the recipient of the photo own the actual physical picture but not the content and therefore the right to reproduce it anywhere? Is the owner of the photo the person who took it or the person who appears in the photo? What if it’s one and the same, a "selfie"?

Revenge porn sites also have a lot of the protections enjoyed by sites like Facebook and Flickr. Under Section 230 of the Communications Decency Act, notes Somini Sengupta at The New York Times, third-party platforms are usually not liable for content generated by their users.

If prosecutors can't go after sites, they would have to go after users — who are often anonymous. If an image goes viral, that further complicates the issue of who is responsible for posting an illegal photo.

There are also First Amendment concerns, which have been raised by the American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF).

"Whenever you try and criminalize speech, you have to do so in the most narrowly tailored way possible," EFF lawyer Nate Cardozo tells KABC Los Angeles. He worries that California's bill "also criminalizes the victimless instances" — such as sites that host legal, consensual pornography.

Regardless of the legal complications, passing the bill sends a message to police and prosecutors, argues Danielle Citron, a law professor at the University of Maryland. "It signals taking the issue seriously, that harms are serious enough to be criminalized," she tells the Times.