
Artificial intelligence is about to transform science journalism – here’s how to prepare for it.

What will the artificial intelligence revolution mean for science journalism?
Most people in the field don’t seem to have given it much thought, which is worrying, because so much of journalism will inevitably be changed by AI – and is already being changed. Some fear job losses in the long term, but few have considered the opportunities AI offers in the short term.


AI will be the catalyst of the third disruption in journalism, potentially changing the way we produce and consume news, argues Bertrand Pecquerie, CEO of the Global Editors Network. (The first two disruptions were the Internet and smartphones.) “Ignoring the development of new technologies is not the solution,” writes Pecquerie.

Indeed, many senior science journalists with 20 or 30 years of experience wouldn’t be around if they hadn’t embraced past innovations in the newsroom, be it the transition from typewriter to keyboard, from phone to e-mail, from print to online, or from words to multimedia. As one of them once told me: what we do is tell stories; that is the important part of our job, not the medium we use.

Now we’re facing a world where computers can tell those stories without us. We’ve already got news-writing bots that can produce stories faster, and in greater volume, than human reporters can. Take, for example, the LA Times’ Quakebot or the Washington Post’s Heliograf. It always strikes me how many science journalists, and even journalism students, are unaware of this. It’s a fairly recent innovation, and likely one that will sweep into newsrooms and take over much of the writing – and other aspects of media work – in the next few years.

And unlike other recent disruptions, which heralded bright young, social-media-savvy Internet natives whom the older generation perceived as taking their jobs, the AI developments are so new, and often based on such highly technical science, that younger journalists are in no better position than the rest of us to take advantage of them. Indeed, entry-level jobs with more mundane and formulaic tasks, such as short news writing or fact-checking, will probably be the first to be affected by AI. (Furthermore, as a mobile journalism trainer from a major school in the US told a class recently, young people consume a lot of mobile technology but do not necessarily use it to produce content – they can be as illiterate as anyone else about using the latest tech to produce journalism.)

Given the high costs of hiring developers, these writing bots might not yet be cheaper than human reporters, but that will change soon. In fact, big media companies that have AI technology are now leasing it to dozens of smaller media outlets, which offsets their initial development costs. This also potentially creates a new world of haves and have-nots in the media space, where smaller outlets just won’t be able to afford their own in-house AI know-how and will depend on second-hand, trickle-down products sold or leased to them by the big firms. Open-source code posted on sites such as GitHub offers some promise (though you still need to know how to read and use it), but this divide is likely to be an issue to watch.

If you think science is still too complex for AI bots to tackle, take a look at SciNote’s AI Manuscript Writer: it writes its own research papers. It’s only a short step from that to AI writing press releases and short news stories based on papers it’s written.

AI editors might not be far off, either. Take this example published recently in Wired about an algorithm that acts as an editor for science fiction stories. “It’s commissioning a story with guidelines and then forcing me to write it the way it wants,” writes Stephen Marche, the author of the piece. “If I don’t do it right, the algorithm makes me do it again, and again, until I get it right.” An AI bot that can make anyone write in the style of your favourite science writer could be nigh. What will that mean for our industry and people who have a specific style of writing?

Reporters and editors are not the only ones in science newsrooms facing AI competition. Many digital marketing staff these days spend lots of time posting content on social media, trying to drive traffic to their sites and get more engagement with their content. AI can analyse huge amounts of historical and current data to make much faster and more efficient decisions about what to post, when and where. Cue, for example, Echobox, a social media platform for publishers, built on artificial intelligence, that promises to increase reach and save time.
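The core idea behind such scheduling tools can be illustrated with a toy sketch (the data and function here are hypothetical; a real system such as Echobox uses far richer signals than posting hour alone):

```python
from collections import defaultdict

# Hypothetical engagement history: (hour_posted, clicks) pairs.
history = [(9, 120), (9, 150), (13, 300), (13, 280), (18, 90)]

def best_posting_hour(history):
    """Return the hour with the highest average clicks in past data."""
    by_hour = defaultdict(list)
    for hour, clicks in history:
        by_hour[hour].append(clicks)
    return max(by_hour, key=lambda h: sum(by_hour[h]) / len(by_hour[h]))

print(best_posting_hour(history))  # 13 for this toy data set
```

A production system would fold in the day of the week, the topic of the story and real-time traffic, but the principle is the same: learn from past engagement and decide automatically.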

It’s important to keep science reporting accurate, and to do that, many media outlets still employ sub-editors or fact-checkers as a second line of defence in case the reporter and editor get anything wrong. Again, a recent UK start-up called Factmata is on the case: it has just launched a beta product to build a fact-checking community powered by artificial intelligence. It’s not hard to imagine similar software eventually doing the work of human fact-checkers. Indeed, Full Fact has already developed, and is using, an automated fact-checking platform.

Or take video platforms such as Wibbitz or Wochit, which use AI to help journalists create videos about their work, turning anyone into a multimedia reporter – and potentially making video editors redundant. Writers, editors, fact-checkers, digital marketing experts, multimedia editors: all of these jobs are being automated by AI advances. How much longer will publishers keep them on staff when there are cheaper, robotic alternatives? Is the future of science news something akin to (e) Science News, a media website with no human writers or editors, powered by fully automated artificial intelligence?

And if AI entirely writes and presents the news – and even does so exactly the way each of us would like to read it, as we’ve seen with social media echo chambers – what will this mean for democracy, which relies on shared knowledge, debate and disagreement, wonders Arun Vishwanath, a technologist associated with the Berkman Klein Center at Harvard University. “The problem with AI is not only that it will do things faster or better than human journalists, but it is also that we will trust it implicitly,” he writes.

The solutions, he proposes, might be forward-looking policy, greater media transparency and better media literacy, to prevent AI from dissecting reality into many tiny worlds of alternative facts created by targeted media content.

Pecquerie, meanwhile, seems to invite us to embrace the changes and make the most of them. AI offers a host of solutions that can help us do more diverse journalism, be it the integration of text and multimedia, or experimenting with voice-generated AI to make ‘conversational journalism’ for hardware such as Alexa. He envisages AI being used as a newsroom assistant: one we can ask things, that could alert us to breaking stories and trending topics, perhaps fact-check our work, and help us get the best possible reach online for the end product. For science news, that might mean predicting the most important research papers to report on this week, and which parts of those papers we should focus on to find the really newsworthy issues – the sort of expertise that takes reporters and editors years to develop.

AI might even draft interview questions for us, and allow us to produce many more stories by automating all parts of the reporting process, from data collection to transcription and production.

And, perhaps worryingly, Pecquerie thinks AI will allow newsrooms to produce a lot of content with a limited number of curators. Jobs might shift from reporting and editing to curation and AI oversight. Fake news, too, will become ever easier to create and promote, weakening democracy and scientific literacy.

So we’re looking at a major disruption of science newsrooms thanks to AI advances: a disruption that promises to transform our work in both positive and negative ways. Where we’ll end up, no one knows. But it’s up to us to embrace the changes and try to guide AI applications towards positive uses that will help science journalists do a better job. Ignoring the coming changes won’t stop them.

Perhaps one place to start would be to think about the problems and challenges you face as a science writer, and to brainstorm solutions that machine learning and smart AI could help you with. And then, just as any science writer should have a statistician on call to help them make sense of research data, maybe we should each have an AI expert on call – because the only way to use the technology to improve journalism will be for journalists and AI engineers to combine forces and insights from their respective fields.



Mico Tatalovic is a science journalist from Rijeka, Croatia. Over the last decade, he has worked as a science news editor at SciDev.Net and New Scientist in London, UK. He is currently the chairman of the Association of British Science Writers, and is a board member of the Balkan Network of Science Journalists. He is also a Knight Science Journalism fellow at MIT 2017/2018 where he is researching artificial intelligence applications to science journalism.

Comments

  1. What would also be interesting is if AI eventually placed limits on the words a journalist used to describe something. A "breakthrough" would have to have a higher degree of certainty than "an advance", and that would be higher than a "new finding". Indeed, maybe there would be some image on the page that gave readers a visual expression of this certainty and uncertainty. This "might" address what seems like a never-ending fake-news issue in science journalism. It’s the "On Tuesday, scientists found that carrots help prevent cancer", followed by "On Wednesday, it turns out cauliflower is better than carrots", followed by "On Thursday, a new study shows neither one makes much difference". This leads to a sort of perpetual intellectual do-si-do, which in turn feeds into a skepticism about science in general. And rightly so.

    1. Perhaps it could be implemented within a newsroom by sub-editors, or editors, who could then let the algorithm police their pet peeves automatically. An interesting idea. That's my argument, really: we should think of different ways we could harness this technology to improve what we do.
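      The word-policing idea above can be sketched in a few lines (a toy illustration; the certainty scale and the function are hypothetical, and a real tool would need a far richer vocabulary and context-awareness):

      ```python
      # Toy certainty scale, loosely following the commenter's hierarchy:
      # stronger claims should require stronger evidence.
      CERTAINTY = {"breakthrough": 3, "advance": 2, "finding": 1}

      def flag_overclaims(text, evidence_level):
          """Flag words whose implied certainty exceeds the evidence level."""
          words = text.lower().replace(",", "").replace(".", "").split()
          return [w for w in words if CERTAINTY.get(w, 0) > evidence_level]

      print(flag_overclaims("A breakthrough in cancer research", evidence_level=1))
      # ['breakthrough']
      ```

      A sub-editor could tune the word list and thresholds, and the check would then run automatically over every draft.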

  2. The "breakthrough" word seems to appear more often in press releases than research papers. Can AI compare reports against papers for language like that?

    My addition to the "Can AI deal with this one?" hit list is newspaper articles that carry the headline, "scientists say". (A variation on the "scientists have found" mentioned above.) Often, of course, they are not scientists.

