
With the threat of regulation looming, Google doubles down on its fight against false news

With a $300-million investment in journalism-related initiatives, the tech giant wants to show it's taking the spread of disinformation, the manipulation of its platform, and threats to democracy seriously.

In response to recent threats to democracy, the tech giant is investing $300M in a journalism initiative

In a presentation for publishers in New York, Google's chief business officer Philipp Schindler acknowledged they 'haven't always gotten it right.' But he stressed the company's continuing commitment to helping journalism thrive. (Spencer Platt/Getty Images)

Amid growing concern that some of the world's largest and most influential tech companies are failing to adequately protect users from the misuse of their platforms, Google is doubling down on its own efforts to curb the spread of misinformation and false news.

The company said on Tuesday that it was introducing a slew of new media-focused products and initiatives called the Google News Initiative and committing to a $300-million US investment over the next three years. Those efforts include the creation of a new academic lab to study and counter the spread of disinformation, a new subscription revenue tool for publishers, and funding for new media literacy programs worldwide.

The initiative is Google's latest effort to convince news publishers to see the tech giant as a partner, and not a competitor, despite the fact that the overwhelming majority of digital advertising revenue goes to Google and Facebook each year.

But more urgently, with lawmakers raising the spectre of regulating Big Tech, Google wants to prove it is taking the spread of disinformation, the manipulation of its platform, and threats to democracy seriously. Supporting and elevating authoritative, higher-quality content plays a crucial part in that effort.

In a presentation for publishers at its New York office, Google's chief business officer Philipp Schindler acknowledged they "haven't always gotten it right." But he stressed the company's continuing commitment to helping journalism thrive.

Who's responsible for handling false news?

Both Google and Facebook have faced fierce questioning from lawmakers for their respective platforms' roles in enabling malicious actors to spread false news, disinformation, and divisive political ads in the run-up to the 2016 U.S. presidential election and beyond.

This weekend, Facebook once again drew lawmakers' ire, after it was revealed that data analytics firm Cambridge Analytica had obtained detailed profile data harvested from 50 million users, in most cases without their explicit consent.

The episode has led to renewed calls for tech companies to be subject to greater political oversight or control, an outcome that Facebook and Google would desperately like to avoid.

In response, Facebook has gone to great lengths over the past year to prove the company can be a force for social good. It has announced partnerships with fact-checking organizations, supported news literacy initiatives, introduced new reporting and publishing tools for journalists, and increased transparency around political ads.

U.S. Senate Intelligence Committee Vice Chair Mark Warner has been particularly vocal about the need for tech companies to do more to combat misinformation on their platforms, and has introduced legislation to regulate online political ads. (Carlos Barria/Reuters)

Most recently, Facebook made dramatic changes to its News Feed, prioritizing posts from friends and family over those from news organizations and brands.

Google's initiatives are, in part, similar efforts to mollify critics who have argued that tech companies haven't done enough to get the misuse of their platforms under control.

Yet those efforts have mostly shifted the responsibility for identifying and debunking false news and misinformation away from tech companies to third-party fact-checkers, newsrooms and media literacy groups, rather than addressing the underlying factors that incentivize its spread.

Rather than make sweeping changes to the way YouTube surfaces or recommends videos, for example, Google recently announced it would display factual information from Wikipedia below contested videos instead, and highlight verified videos in a "Top News" shelf.

To do anything more drastic would require a fundamental reconfiguring of the way their services are designed: optimized first and foremost to serve highly targeted content and advertising, Facebook and Google's primary source of revenue. Neither is likely to change that anytime soon.

Grants for research and education

The Google News Initiative builds on previous Google efforts, including a tool that uses artificial intelligence to identify hateful or offensive comments, and a technology called AMP that makes stories load faster on mobile devices.

As part of its announcement, Google said that its philanthropic arm, Google.org, would distribute $10 million in grant money to organizations worldwide working on media literacy programs. To start, $3 million will be given to a U.S. media literacy initiative aimed at students called MediaWise, which is modelled after an initiative called NewsWise that was piloted in Canada last autumn.

Content developed by NewsWise is estimated to reach more than one million students, starting with Ontario classrooms in the spring and expanding nationwide next year ahead of the 2019 federal election.

Google is also launching an academic lab with the Harvard University research group First Draft. The Disinfo Lab, as it is called, would use a mix of computational tools and human expertise to identify trending dis- and misinformation, particularly during elections and moments of breaking news, and share those insights with newsroom partners.

It is being modelled after prior work done during the French election.

The company also said it would put some of its investment toward hiring a new team of machine-learning engineers dedicated to building and scaling more tools that meet publishers' needs, from the creation of compelling new story formats to new ways of collecting revenue.

One of those tools would help journalists identify cutting-edge fake audio or video clips, what Google calls synthetic media, which some predict will soon be used in attempts to fool the media. Training data will be made available to researchers and journalists so they can develop their own detection tools, too.