YouTube suspends ads on Logan Paul's channels after recent pattern of behavior in videos

More problems and controversy for Logan Paul, the YouTube star who caused a strong public backlash when he posted a video of a suicide victim in Japan. Google's video platform today announced that it would temporarily pull advertising from his channels in response to a "recent pattern of behavior" on his part.

This is in addition to Paul's suspensions from YouTube's Preferred ad program and its Originals series, both of which have been in place since January; and it comes days after YouTube's CEO promised stronger enforcement of YouTube's policies using a mix of technology and 10,000 human curators.

Since coming back online after a one-month break from the service in the wake of the Japanese video, in addition to the usual (asinine) content of his videos, Paul has tasered a rat, suggested swallowing Tide Pods, and, according to YouTube, intentionally tried to monetize a video that clearly violated its guidelines for advertiser-friendly content. (We're asking if we can get a specific reference to which video this might be — they all seem fairly offensive to me, so it's hard to tell.)

"After careful consideration, we have decided to temporarily suspend ads on Logan Paul's YouTube channels," a spokesperson told TechCrunch in an emailed statement elaborating on the tweet. "This is not a decision we made lightly. However, we believe he has exhibited a pattern of behavior in his videos that makes his channel not only unsuitable for advertisers, but also potentially damaging to the broader creator community."

Yesterday, during a series of "fake news" hearings held in the U.S. by a parliamentary committee from the UK, YouTube's global head of policy Juniper Downs said that the company had found no evidence of videos pointing to Russian interference in the UK's Brexit vote, but the platform continues to face a lot of controversy over how it vets content on its site, and how that content is subsequently used unscrupulously for financial gain. (YouTube notably was criticised for taking too long to react to the Japanese video that started all of Paul's trouble.)

This is a contagion problem for YouTube: not only do situations like this damage public perception of the service — and potentially have an impact on viewership — but they could affect how much the most premium brands choose to spend on ads on the platform.

Interestingly, as YouTube continues to work on ways of improving the situation with a mix of machine learning and human approaches, it appears to be starting to reach beyond the content of YouTube itself.

The Tide Pod suggestion came on Twitter — Paul wrote that he would swallow one Tide Pod for each retweet — and appears to have since been deleted.

Generally, YouTube reserves the right to hide ads on videos and watch pages — including ads from certain advertisers or certain formats.

When a person commits especially serious or repeated violations, YouTube might choose to disable ads on the whole channel or suspend the person from its Partner program, which is aimed at channels that have reached 4,000 watch hours in 12 months and 1,000 subscribers, and lets creators make money from a special tier of ads and via the YouTube Red subscription service. (This is essentially where Paul has landed today.)

Since YouTube is wary of getting into the censorship game, it leaves an exit route open to people who choose to post controversial things anyway: posters can turn off ads on individual videos. From what we understand, Paul's channel and videos will be reevaluated in the coming weeks to see if they meet guidelines.

It's not clear at all how much Paul has made from his YouTube videos. One estimate puts his YouTube ad revenue at between $40,000 and $630,000 per month, while another puts it at $270,000 per month (or around $3.25 million/year). To note, he'd already been removed from the Preferred program and the Originals program, so that would have already dented his YouTube income.

And you have to ask whether suspending ads genuinely fixes the bigger content issues on the platform. While an advertising suspension might mean a loss of some revenue for the creator, it's not really a perfect solution.

Logan Paul, as one example, continues to push his own merchandise in his videos, and as a high-profile figure who has not lost his whole fan base, he will still get millions of views (and maybe more now because of this). In other words, the originally offending content (and a viable business model) is still out there, even if it no longer has a YouTube monetization element attached to it.

On the other hand, SocialBlade, one of the services that tracks analytics on YouTube creators, notes that Paul's views have dropped 41 percent and his subscribers are down 29 percent in the last month, so maybe there is a god.


Facebook lets you tip game live streamers $3+

Facebook Live is launching monetization for video gameplay streamers, allowing users to tip creators a minimum of $3 via the desktop site. Right now, the giver of the tip doesn't get any special call-out or privileges, though Facebook tells me it's considering different options for creators and gamers. For instance, it could have a special emoji Reaction float across the stream as a way to thank the fan who gave money.

The amount Facebook will keep from these tips, which it calls "fan support," isn't clear yet, but the company tells me it's safe to assume there will be a revenue share. Apparently it's too early to lock any percentage in, though Facebook has taken a 30 percent cut from game developers in the past, and currently takes a 45 percent share of ad revenue from creators who place ad breaks in their videos, so it could be in that ballpark.
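For a rough sense of what either rate would mean per tip, here is a minimal sketch; the 30 and 45 percent figures come from the reporting above, while applying them to tips at all is purely an assumption, since Facebook hasn't confirmed a rate:

```python
# Hypothetical payout math for Facebook's "fan support" tips.
# Facebook has not confirmed any revenue share for tips; the rates
# below are the developer cut and ad-break cut cited above.
def creator_payout(tip_usd: float, platform_share: float) -> float:
    """Return what the creator would keep after the platform's cut."""
    return round(tip_usd * (1 - platform_share), 2)

print(creator_payout(3.00, 0.30))  # 2.1  -> $2.10 under a 30% cut
print(creator_payout(3.00, 0.45))  # 1.65 -> $1.65 under a 45% cut
```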

Logan Paul returns to YouTube with a video about suicide prevention

Logan Paul, the YouTuber who sparked a public backlash three weeks ago after posting a video in Japan's "suicide forest," has returned to the platform. In his first post back, he published a video focused on suicide and self-harm prevention.

Paul first found himself in trouble when, in early January, he posted a video titled "We found a dead body in the Japanese Suicide Forest." The video, which showed a suicide victim with only the face blurred out, was viewed more than six million times before it was taken down. In it, Paul jokes about the body and approaches the subject of suicide with an unacceptable levity.

The internet’s reaction was fiercely negative.

Paul apologized, saying he got caught up in the moment and meant to raise awareness for suicide and suicide prevention.

He also said he’d be taking a break from YouTube to reflect.

In the days that followed, YouTube removed Paul from the Google Preferred ad program and suspended work on his Originals. YouTube also bears some blame for the controversy, as the platform allowed the video to go live and remain online after it passed through YouTube's moderation screening.

In the new video, Paul interviews a survivor of a suicide attempt, as well as the director of the National Suicide Prevention Lifeline, and pledges to donate $1 million to suicide prevention.

You can check out the video yourself below.


YouTube tightens the rules around creator monetization and partnerships

In an effort to regain advertisers' trust, Google is announcing what it says are "tough but necessary" changes to YouTube monetization.

For one thing, it's setting a higher bar for the YouTube Partner Program, which is what allows publishers to make money through advertising. Previously, they needed 10,000 total views to join the program. Starting today, channels also need to have 1,000 subscribers and 4,000 hours of view time in the past year. (For now, those are just requirements to join the program, but Google says it will also start applying them to current partners on February 20.)
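Expressed as a simple check, the new bar looks something like the sketch below (the function and field names are hypothetical, not part of any YouTube API):

```python
def meets_partner_program_bar(subscribers: int,
                              watch_hours_past_12_months: float) -> bool:
    """New YouTube Partner Program threshold as described above:
    at least 1,000 subscribers and 4,000 hours of watch time in
    the past year. (The old bar was 10,000 lifetime views.)"""
    return subscribers >= 1_000 and watch_hours_past_12_months >= 4_000

# A channel with 1,200 subscribers but only 3,500 watch hours would
# no longer qualify once the rule applies to existing partners.
print(meets_partner_program_bar(1_200, 3_500))  # False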

This might reassure marketers that their ads are less likely to run on random, fly-by-night channels, but as Google's Paul Muret writes, "Of course, size alone is not enough to determine whether a channel is suitable for advertising."

So in addition, he said:

We will closely monitor signals like community strikes, spam, and other abuse flags to ensure they comply with our policies. Both new and existing YPP channels will be automatically assessed under this strict criteria and if we find a channel repeatedly or egregiously violates our community guidelines, we will remove that channel from YPP. As always, if the account has been issued three community guidelines strikes, we will remove that user’s accounts and channels from YouTube.

Muret also described changes planned for the more exclusive Google Preferred program, which is supposed to be limited to the best and most popular content. Vlogger Logan Paul was part of Google Preferred until the controversy over his "suicide forest" video got him kicked out last week — a story that suggests some of the limitations of Google's approach.

Moving forward, Muret said the program will offer "not only … the most popular content on YouTube, but also the most vetted." That means everything in Google Preferred should be manually curated, with ads only running "on videos that have been verified to meet our ad-friendly guidelines." (Looks like all those new content moderators will be busy.)

Lastly, Muret said YouTube will be introducing a new "three-tier suitability system" in the next few months, aimed at giving marketers more control over the trade-off between running ads in safer environments and reaching more viewers.


YouTube drops Logan Paul from Google Preferred and puts his Originals on hold

YouTube has taken further action against social media superstar Logan Paul, dropping the vlogger from its Google Preferred program, which is meant to be a mark of trust signaling to advertisers that they can rely on these media creators to make higher-quality content.

After Paul posted a video of a dead body he filmed hanging from a tree in Japan's colloquially titled "suicide forest," it's no surprise that YouTube and Google would want him out of the Preferred program. Paul isn't cut off from all ad benefits on YouTube, however, and can still use the YouTube Partner Program to monetize videos.

The consequences of Paul's grievous error in judgment don't end there, however: the YouTuber won't be featured in the fourth season of the YouTube Red scripted original "Foursome," the company said, and his other upcoming Originals projects are on hold for the time being, with their ultimate fate still to be determined.

YouTube's prior actions against Paul following his transgression include issuing a strike for his violation of its posted community guidelines, as well as releasing a statement about how its decision to pull the video was in keeping with its policies.

Paul announced following the controversy that he was taking some time away from his practice of posting daily vlogs, and his last video on YouTube was his apology post to viewers, which was published a week ago.


Amazon updates Fire TV's YouTube app to redirect users to a web browser instead

The feud between Amazon and Google continues today with the early removal of YouTube from the Fire TV — a move Google had said wouldn't take place until January 1, 2018. But as a number of Fire TV owners have now noticed, launching the YouTube app today informs you that you can choose to watch "YouTube and millions of other websites" via a web browser. You then have the option to choose between Amazon's own Silk browser and Firefox, with a click of a button.

The disagreements between the companies that led to this consumer-unfriendly stance go back several years.

Google hasn't been so pleased with Amazon's anti-competitive streak when it comes to allowing competitors to sell their own hardware items — like smart speakers and media players — on Amazon.com. The retailer has long refused to stock devices that compete with its own — like Apple TV, Chromecast, Google Home, and others — in an effort to promote Amazon products like Echo speakers and Fire TV.

However, Amazon and Apple recently negotiated an agreement that brought the Apple TV back to Amazon, and Amazon's Prime Video app to Apple TV.

Meanwhile, it seemed relations between Amazon and Google were improving earlier this month when the Chromecast and Chromecast Ultra reappeared on Amazon.com. (They're still showing as "currently unavailable," however.)

The other issue at hand was that Amazon had launched its own version of Google's YouTube player for its Echo Show device, without working with Google to ensure core features were accessible. There's been quite a bit of back and forth on this matter, with Google pulling that player, only to have Amazon surreptitiously work around the block by implementing a web version of YouTube instead.

That led Google this month to declare that it would pull YouTube entirely from Amazon's hardware lineup, including Fire TV. The player was yanked immediately from the Echo Show, but Fire TV owners were told the app would keep working until January 1, 2018.

It would be unusual for Google to actually pull the YouTube app ahead of its deadline, which indicates this change- to point YouTube users to web browsers instead- may have come from Amazon’s side.

That theory is further backed up by the fact that sideloading the YouTube app onto Fire TV will continue to display the "warning" message, according to a report from AFTVNews.com and commenters on Reddit.

Above: Warning screen. Image credit: AFTVNews.com

However, it's unclear if Amazon's choice to redirect YouTube users to web browsers ahead of schedule has fully rolled out. One report from Cord Cutter News said you'll only see the browser option screen if you have a browser installed on your Fire TV, for example. (Update: a decompilation of the app's code, though, indicates the app has been changed to point only to the web browser, whether or not you have one installed.)

On two Fire TV devices we have here (a prior-generation and a new-generation player), we're only seeing the browser option screen as of today. And many users worldwide are reporting the same, per Twitter.

We've reached out to Amazon and Google for comment. Google has not responded, but a rep from Amazon offered the following statement:

“I can confirm that YouTube and millions of other websites are accessible by using a web browser like Firefox or Silk on Fire TV.”

With all this drama, is it any wonder that Roku is the top streaming device in the U.S.?

Consumers don't want to be jerked around like this, all because two competitors can't work out a reasonable solution that serves both their interests. At the end of the day, Amazon and Google are only hurting themselves by alienating their overlapping customer base — a group that easily could (and likely should) switch to Roku at this point.


YouTube is launching its own take on Stories with a new video format called Reels

Even YouTube is adding Stories. The popular format introduced by Snapchat, then adopted by Instagram, Skype, Facebook, Messenger and even some dating apps, is now making its way to YouTube as a new feature the company is calling "Reels." To be clear, Reels is YouTube's spin on Stories, not an exact copy. And Reels won't live at the top of the app, as Stories do on Instagram — instead, they'll appear in a brand-new tab on a creator's channel.

The launch of the Reels beta was mentioned briefly in an announcement today about the expansion of the YouTube Community tab to all creators with over 10,000 subscribers.

We asked YouTube for more details on Reels, which will soon be introduced into beta for a handful of creators for feedback and further testing.

The company tells us the idea with Reels is to introduce a new video format on YouTube that lets creators express themselves and engage fans without having to post a full video.

Instead, creators make new Reels by shooting a few quick mobile videos of up to 30 seconds each, then adding filters, music, text and more, including new "YouTube-y" stickers.

And unlike Stories on other platforms, YouTube creators can make multiple Reels, and they won't expire.

Below is what Reels will look like for creators at launch, but be aware that the format could change ahead of a public release.

For video viewers, Reels may not mar the experience the way the addition of Stories did on Messenger or Facebook, where they weren't as welcome.

Since Reels are posted to a separate tab on the creator's channel, similar to Community itself, viewers can choose whether or not to go watch these new videos.

But if users engage with Reels, YouTube will take that as a signal that they'd like to see them more often. That could trigger Reels' appearance on the viewer's YouTube home page as recommendations, YouTube tells us.

The arrival of Reels is one of a handful of changes for YouTube and YouTube Community, the social platform launched last autumn as a new way for video creators to engage their fan base. A mini social network within YouTube's larger social network, Community lives on a creator's channel in its own tab, allowing them to share updates using text, photos, GIFs, polls, and more.

The audience can then thumbs up or down the content, as they do with videos, and comment on the posts.

Also new to Community is a change to how posts work and are displayed to viewers. Now, a creator's most engaged viewers will see Community posts in their Home feed on YouTube, even if they're not subscribed to the channel.

YouTube says notifications are also now optimized so fans aren’t spammed with every new Community post.

Community was initially launched into beta with only a handful of YouTube creators, including John & Hank Green, AsapSCIENCE, The Game Theorists, Karmin, The Key of Awesome, The Kloons, Lilly Singh, Peter Hollens, Rosianna Halse Rojas, Sam Tsui, Threadbanger, and Vsauce3.

Today, YouTube detailed how some of its testers have been using Community so far. For instance, Grav3yardgirl uses Community to ask fans to pick what to unbox next; Lele Pons posts GIFs that serve as trailers for her upcoming videos; Kevin Durant shares photos on NBA gamedays. And some have used it to send traffic to other channels, among other purposes.

YouTube did not say when Reels will arrive in beta, how long until it’s publicly available, or which creators will receive the format first.


I watched 1,000 hours of YouTube Kids content and this is what happened


The multicolored slurry of user-generated content that for years has been successfully netting millions of kids' eyeballs on YouTube by remixing popular cartoon characters to crudely act out keyword-search scenarios lurched into wider public view this week, after writer James Bridle penned a scathing Medium post arguing the content represents "a kind of violence inherent in the combination of digital systems and capitalist incentives."

What do you get if you endlessly recombine Spiderman and the Joker with Elsa from Frozen and lashes of product placement for junk food brands like McDonalds?

A lot of views on YouTube, clearly. And thus a very modern kind of children's 'entertainment' that can clearly only exist on a vast, quality-uncontrolled, essentially unregulated, algorithmically incentivized ad platform with a very low barrier to entry for content creators, which judges the resulting UGC purely on whether it can lift itself out of the infinite supply of visual soup by getting views — and does so by being expert at pandering to populist childish cravings, the keyword search criteria that best express them and the algorithms that automatically rank the content.

This is effectively — if not yet literally — media programming by SEO-optimized robots.

And, as Marshall McLuhan wrote in 1964, the medium is the message.

… because it is the medium that shapes and controls the scale and form of human association and action. The content or uses of such media are as diverse as they are ineffectual in shaping the form of human association. Indeed, it is only too typical that the "content" of any medium blinds us to the character of the medium. It is only today that industries have become aware of the various kinds of business in which they are engaged. When IBM discovered that it was not in the business of making office equipment or business machines, but that it was in the business of processing information, then it began to navigate with clear vision.

Insofar as children are concerned, the message being produced via YouTube's medium is often nonsensical: a mindless and lurid slurry of endlessly repurposed permutations of stolen branded content, played out against an eerie combination of childish tunes, giddily repeating nursery rhymes, and crude cartoon voice effects.

It’s a literal pantomime of the stuff children might think to search for. And it speaks volumes about the dysfunctional incentives that define the medium.

After the latest outcry about disturbing UGC intentionally targeting children on YouTube, Google has said it will implement new policies to age-restrict this type of content to try to prevent it ending up in the YouTube Kids app, though a prior policy forbidding "inappropriate use of family characters" clearly hasn't stemmed the low-brow flow of pop-culture soup.

The maniacal laughter that appears to be the signature trope of this 'genre' at least seems appropriate.

McLuhan’s point was that content is intrinsically shaped by the medium through which we obtain it. And that it’s mediums themselves which have the power to enact structural change by reconfiguring how humans act and associate en masse.

The mindless cartoon noise mesmerizing children on YouTube might be the best visual example of that argument yet. Even if McLuhan thought analyzing content itself would merely distract from proper critical analysis of mediums.

All you have to do is imagine the unseen other half of these transactions: Aka all those unmoving toddlers staring into screens as they devour hours and hours of junk soup.

The thriving existence of such awful stuff, devised with the sole intent of generating high volumes of ad revenue by being structured so as to be surfaced via search and recommendation algorithms, is also a perfect example of how the content humans can be most easily persuaded to consume (aka clickbait) and the stuff that might be most intellectually nourishing for them to ingest are two very different things.

Algorithmically coordinated mega-platforms like YouTube may host quality content but are expert at incentivizing the creation and consumption of clickbait — thanks to ad-targeting business models that are fed by recommendation systems which monitor user inputs and actions to identify the most clickable, and thus most addictive, stuff to keep feeding them.
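In caricature, that incentive structure reduces to a ranking objective with a single term. A minimal, illustrative sketch (not YouTube's actual system); the point is that nothing about quality ever enters the sort key:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_click_prob: float  # estimated from the user's past actions
    quality_score: float         # exists, but never used below

def rank_recommendations(candidates: list[Candidate]) -> list[Candidate]:
    # An engagement-first recommender sorts purely by predicted clicks;
    # "most clickable" wins regardless of whether it's clickbait.
    return sorted(candidates,
                  key=lambda c: c.predicted_click_prob,
                  reverse=True)
```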

(This is not just a problem with kid-targeted content, of course. On the same dysfunctional theme, see also how quickly disinformation spreads between adults on Facebook, another ad-funded, algorithmically coordinated mega-platform whose priority for content is that it be as viral as possible.)

Where kids are concerned, the structure of the YouTube medium demonstrably rewards pandering to the most calorific of visual cravings. (Another hugely popular kids' content format regularly racking up millions and millions of views on YouTube is the toy unboxing video, for example.) Thereby edging out other, more thoughtful content — given that viewing time is finite.

Sure, not all the content that's fishing for children's eyeballs on YouTube is so cynically constructed as to simply consist of keyword-search soup. Or purely involve visuals of dolls they might crave and pester their parents to buy.

Some of this stuff, while barely original or sophisticated, can at least involve plot and narrative elements (albeit frequently involving gross-out/toilet humor — so it's also the sort of stuff you might prefer your children didn't spend hours watching).

And sure, there have been moral panics in the past about kids watching hours and hours of TV. There are in fact very often moral panics associated with new technologies.

Which is to be expected, as mediums are capable of reconfiguring societies at scale. Yet they also often do so without adequate attention being paid to the underlying technology that's causing structural change.

Here at least the problems of the content have been linked to the incentive structures of the distribution platform — even if wider questions are getting less scrutiny, like what it means for society to be collectively captivated by a free and limitless supply of visual mass media whose content is shaped by algorithms intent only on maximizing economic returns.

Perhaps the penny is beginning to drop in the political realm, at least.

While kids' TV content could (and can) be plenty mediocre, you'd be hard pressed to find so many examples of programming as literally mindless as the stuff being produced at scale for kids to consume on YouTube — because the YouTube medium incentivizes content mills to produce click fodder, both to drive ad revenue and to edge out other content by successfully capturing the attention of the platform's recommendation algorithms and so stand a chance of getting views in the first place.

This dismal content is also a great illustration of the digital axiom that if it's free, you're the product. (Or rather, in this case, your kid's eyeballs are — raising questions over whether lots of time spent by kids viewing clickbait might not be to the detriment of their intellectual and social development; even if you don't agree with Bridle's more pointed assertion that some of this content is so bad as to be intentionally designed to traumatize children and so, once again looping in the medium, that it represents a systematic kind of child abuse.)

The worst examples of the regurgitated pop-culture slurry that exists on YouTube can't claim to have even a basically coherent narrative. Many videos are just a series of repetitive graphical scenarios designed to combine the culled characters in a mindless set of keyword-searchable actions and reactions. Fight scenes. Driving scenes. Junk-food product-placement scenes. And so it goes, mindlessly, on.

Some even self-badge as "educational" content — because in the middle of a 30-minute video, say, they might show the word "red" next to a red-colored McDonald's Big Mac or a Chupa Chups lollipop; and then the word "blue" next to a blue-colored Big Mac or a Chupa Chups lollipop; and then the word "yellow" … and so on ad nauseam.

If there's truly even a mote of educational value there, it must be weighed against the obvious negative of repetitious product placement simultaneously and directly promoting junk food to kids.

Of course this stuff can’t hold a candle to original kids’ comics and cartoon series — say, a classic like Hanna-Barbera’s Wacky Races — which generations of children past consumed in place of freebie content on YouTube because, well, ad-funded, self-sorting, free-to-access digital technology platforms didn’t exist then.

Parents may have griped about that content too at the time — dismissing cartoons as frivolous and time-wasting. But at least those series were entertaining children with well developed, original characters engaged in comic subplots sitting within coherent, creative overarching narratives. Children were learning about proper story structure, at the very least.

We can't predict what wider impact a medium that incentivizes factory-line production of mindless visual slurry for kids' consumption might have on children's development and on society as a whole. But it's hard to imagine anything positive coming from something so intentionally base and bottom-feeding being systematically thrust in front of kids' eyeballs.

And given the content genuinely has such an empty message to impart, it seems logical to read that as a warning about the incentive structures of the underlying medium, as Bridle does.

In truth, I did not watch 1,000 hours of YouTube Kids’ content. Ten minutes of this awful stuff was more than enough to give me nightmares.


How algorithms are pushing the tech giants into the danger zone

The algorithms Facebook and other tech companies use to boost engagement and increase profits have led to spectacular failures of sensitivity, and worse

Facebook is asking users to send it their nude photographs in a project to combat 'revenge porn'. Photo: Alamy Stock

Earlier this month, Facebook announced a new pilot programme in Australia aimed at stopping “revenge porn”- the non-consensual sharing of nude or otherwise explicit photos- on its platform. Their answer? Just send Facebook your nudes.

Yes, that's right: if you're worried about someone spreading explicit images of you on Facebook, you're supposed to send those images to Facebook yourself.

If this sounds to you like some kind of sick joke, you're not alone. Pretty much everyone I talked to about it did a spit-take at the entire premise. But in addition to being ridiculous, it's a perfect example of the way today's tech companies are in over their heads, attempting to engineer their way out of complex social problems — without ever questioning whether their very business models have, in fact, created those problems.

To see what I mean, let's look at how Facebook's new scheme is meant to work: if you're concerned about revenge porn, you complete an online form with the Australian eSafety Commissioner's office. That office then notifies Facebook that you submitted a request. From there, you send the image in question to yourself using Facebook Messenger. A team at Facebook retrieves your image, reviews it, then generates a numerical fingerprint of it known as a "hash". Facebook then stores your photo's hash, but not the photo itself, and advises you to delete your photo from Messenger. After you've done so, Facebook says it will also delete the photo from its servers. Then, whenever a user uploads a photo to the platform, an algorithm checks the photo against the database. If the algorithm finds that the photo matches one reported as revenge porn, the user will not be allowed to post it.
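Mechanically, the hash-and-match step might look like the sketch below. This uses an exact cryptographic hash purely for illustration; a production system would use a perceptual hash (along the lines of Microsoft's PhotoDNA) so that resized or re-encoded copies of a photo still match, which the exact hash below would miss:

```python
import hashlib

# The platform stores fingerprints of reported photos, not the photos.
reported_hashes: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Digest of the image; a stand-in for a perceptual hash."""
    return hashlib.sha256(image_bytes).hexdigest()

def record_report(image_bytes: bytes) -> None:
    # Called after human review confirms the report; only the hash is kept
    # and the image itself is deleted.
    reported_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    # Every new upload is checked against the database of reported hashes.
    return fingerprint(image_bytes) not in reported_hashes
```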

Mark Zuckerberg talks about protecting the integrity of the democratic process, after a Russia-based group paid for political ads on Facebook during the US election. Photo: Facebook via YouTube

Just think for a moment about all the ways this could go wildly wrong. First off, to make the system work at all, you must not only have digital copies of all the images that might be spread, but also be comfortable with a bunch of strangers at Facebook poring over them, and trust that the hashing system will actually catch future attempts to upload the image. And that's assuming everything works as planned: that you won't screw up the upload process and accidentally send the images to someone else, that Facebook staff won't misuse your photos, that your Messenger account won't be hacked, that Facebook will actually delete the image from its servers when it says it will. In short, you have to trust that Facebook's design, backend databases and employees are all capable of seamlessly managing exceedingly personal information. If any one of those things doesn't work perfectly, the user is at risk of shame — or worse.

Given Facebook's track record of dealing with sensitive subjects, that's not a risk any of us should take. After all, this is the company that let Russian-backed organisations buy ads intended to undermine democracy during the 2016 election (ads which, the company now admits, millions of people saw). This is the company that built an ad-targeting platform that allowed advertisers to target people using antisemitic audience categories, including "Jew haters" and "How to burn Jews". And this is a company that scooped up a screenshot of a graphic rape threat a journalist had received and posted on her Instagram account, and turned it into a peppy ad for Instagram (which it owns) that was then inserted into her friends' Facebook pages.

And that's just from the past few months. Looking further back, we can find lots more distressing stories — like the time in 2012 when Facebook outed two lesbian students from the University of Texas, Bobbi Duncan and Taylor McCormick. The students had used Facebook's privacy settings to conceal their orientation from their families, but Facebook posted updates to their profiles saying they had joined the Queer Chorus.

Or how about the launch in 2014 of Facebook’s Year In Review feature, which collected your most popular content from the year and packaged it up for you to relive? My friend Eric Meyer had been avoiding that feature, but Facebook made one for him anyway, and inserted it into his feed. On its cover was the face of his six-year-old daughter, Rebecca, flanked by illustrations of balloons, streamers and people dancing at a party. Rebecca had died earlier that year. But Facebook’s algorithm didn’t know whether that was a good or bad image to surface. It only knew it was popular.

Since Year In Review, Facebook has amped up this type of algorithmically generated celebratory reminder. Now there's On This Day, which, despite my telling Facebook I don't want to see these posts, still pops into my feed at least once a week. There's also Friends Day, a fake holiday where Facebook sends users algorithmically generated photo montages of themselves with their friends — resulting in one man receiving a video, set to jazzy music, showcasing his car accident and subsequent trip to the hospital.

But Facebook keeps inserting celebratory messages and designs into users' memories. Just last week, my sister-in-law received a notification covered in balloons and thumbs-up signs telling her how many people had liked her posts. The image they showed with it? A picture of her broken foot in a cast. I can assure you, she didn't feel especially thumbs-up about falling down a flight of stairs.

Peppa and George about to be cooked by a witch in a YouTube spin-off of the Peppa Pig series. Source: YouTube

What all these failures have in common is that they didn't have to happen. They only occur because Facebook spends far more time and energy constructing algorithmically controlled features meant to drive user engagement, or to give more control to advertisers, than it does thinking about the social and cultural implications of making it easy for 2 billion people to share content.

It's not just Facebook that's turned to algorithms to bump up engagement over the past few years, of course – it's most of the tech industry, especially the portions reliant on ad revenue. Earlier this month, writer James Bridle published an in-depth look at the underbelly of creepy, violent content targeted at kids on YouTube – from knock-off Peppa Pig cartoons, such as one where a trip to the dentist morphs into a graphic torture scene, to live-action "gross-out" videos, which show real children vomiting and in pain.

These videos are being produced and added to YouTube by the thousand, then tagged with what Bridle calls "keyword salad" — long lists of popular search words packed into their titles. These keywords are designed to game or manipulate the algorithm that sorts, ranks and selects content for users to watch. And thanks to a business model aimed at maximising views (and therefore ad revenue), these videos are being auto-played and promoted to kids based on their "similarity" — at least in terms of the keywords used — to content the kids have already seen. That means a child might start out watching a normal Peppa Pig episode on the official channel, finish it, then be automatically immersed in a dark, violent and unauthorised episode — without their parents realising it.
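The keyword salad works because "similarity", in a pipeline like this, is largely keyword overlap. A toy illustration using Jaccard overlap on title words — a deliberate simplification, not whatever YouTube actually computes:

```python
def title_similarity(title_a: str, title_b: str) -> float:
    """Jaccard similarity between the word sets of two titles."""
    a, b = set(title_a.lower().split()), set(title_b.lower().split())
    return len(a & b) / len(a | b)

official = "Peppa Pig visits the dentist"
salad = ("peppa pig dentist doctor scary spiderman elsa "
         "finger family learn colors for kids")

# Enough stuffed keywords overlap the official title for the knock-off
# to surface as "similar" and be queued up for auto-play.
print(title_similarity(official, salad))  # 0.2
```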

YouTube's response to the problem has been to hand responsibility to its users, asking them to flag videos as inappropriate. From there, the videos go to a review team that YouTube says comprises thousands of people working 24 hours a day to review content. If the content is found to be inappropriate for children, it will be age-restricted and won't show up in the YouTube Kids app. It will still appear on YouTube proper, however, where, officially, users must be at least 13 years old — but which in reality is still a system countless children use (just think about how often antsy kids are handed a phone or tablet to keep them occupied in a public space).
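As a flow, the policy described above amounts to the small sketch below (the names are illustrative, not YouTube's internals); note that nothing in it ever removes a video from the main site:

```python
def handle_flag(video_id: str, review_finds_inappropriate_for_kids: bool) -> str:
    """Illustrative flag -> human review -> age-restrict flow."""
    if not review_finds_inappropriate_for_kids:
        return f"{video_id}: no action, remains everywhere"
    # Age restriction only removes the video from the Kids app;
    # it stays watchable on YouTube proper (officially 13+).
    return f"{video_id}: age-restricted, hidden from YouTube Kids only"
```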

Like Facebook's scheme, this approach has several flaws: since it's trying to ferret out inappropriate videos from within children's content, it's likely that most of the people who will encounter these videos are kids themselves. I don't expect a lot of six-year-olds to become aggressive content moderators any time soon. And if the content is flagged, it still needs to be reviewed by humans, which, as YouTube has already acknowledged, takes "round the clock" monitoring.

When we talk about this kind of challenge, the tech companies' response is often that it's simply the inevitability of scale — there's no way to serve billions of users endless streams of engaging content without getting it wrong or allowing abuse to slip by some of the time. But of course, these companies don't have to do any of this. Auto-playing an endless river of algorithmically selected videos to kids isn't some sort of mandate. The internet didn't have to become a smorgasbord of "suggested content". It's a choice that YouTube made, because ad views are ad revenue. You've got to break a few eggs to make an omelette, and you've got to traumatise a few kids to build a global behemoth worth $600bn.

And that's the problem: in their unblinking pursuit of growth over the past decade, these companies have constructed their platforms around features that aren't just vulnerable to abuse, but literally optimised for it. Take a system that's easy to game, profitable to misuse, intertwined with our most vulnerable people and our most intimate moments, and operating at a scale that's impossible to control or even monitor, and this is what you get.

The question now is, when will we force tech companies to reckon with what they've wrought? We've long since decided that we won't let companies sell cigarettes to children or put asbestos into their building materials. If we want, we can decide that there are limits to what tech can do to "engage" us, too, rather than watching these platforms spin further and further away from the utopian dreams they were sold to us on.

Technically Wrong: Sexist Apps, Biased Algorithms and Other Threats of Toxic Tech by Sara Wachter-Boettcher is published by Norton. To order a copy for £20 go to bookshop.theguardian.com or call 0330 333 6846. Free UK p&p over £10, online orders only. Phone orders min p&p of £1.99.


Judge sides with YouTubers Ethan and Hila Klein in copyright lawsuit

Ethan and Hila Klein, the husband-and-wife team behind the popular H3H3 YouTube channel, appear to have won their legal battle against Matt Hosseinzadeh, a.k.a. Matt Hoss. A New York judge today issued a summary judgment in favor of the Kleins.

Hosseinzadeh sued the Kleins last year after they posted a reaction video mocking him. Hosseinzadeh’s initial suit focused less on the criticism per se, and instead alleged that the Kleins had infringed his copyright by featuring clips of one of his videos in their criticism.

The Kleins defended their use of the footage as fair use, and another YouTube creator, Philip DeFranco, raised more than $170,000 for their legal defense — DeFranco wrote, "If they are bullied and drained of funds because of this ridiculous lawsuit and/or they lose this case it could set a terrible precedent for other creators."

Ethan Klein offered a similar sentiment in a tweet today describing the outcome as a "huge victory for fair use on YouTube."

The basic legal issue was whether the Kleins' creation of a reaction video that included clips of Hosseinzadeh's content counts as fair use. In this case, Judge Katherine B. Forrest ruled that it does.

"The key evidence in the record consists of the Klein and Hoss videos themselves," Forrest writes. "Any review of the Klein video leaves no doubt that it constitutes critical commentary of the Hoss video; there is also no doubt that the Klein video is decidedly not a market substitute for the Hoss video. For these and the other reasons set forth below, defendants' use of clips from the Hoss video constitutes fair use as a matter of law."

However, Forrest emphasizes that this isn't meant to be a blanket defense of all reaction videos. She notes that while some of these videos mix commentary with clips of someone else's work, "others are more akin to a group viewing session without commentary."

"Accordingly, the Court is not ruling here that all 'reaction videos' constitute fair use," she says.

Forrest also sides with the Kleins on Hosseinzadeh's subsequent claim that they defamed him in a video about the lawsuit — she writes, "It is clear that defendants' remarks regarding the lawsuit are either non-actionable opinions or substantially true as a matter of law."

We’ve reached out to Hosseinzadeh and the Kleins for comment and will update if we hear back. You can also read the judge’s full decision here.
