Artificial intelligence, or the development of “smart” machines that can perform some of the same tasks as humans, plays a hidden but significant role in our daily lives. When you type half a phrase into the Google search bar and Google suggests several ways you might finish it, an algorithm is at play. Ditto when you purchase flights online, order an Uber ride, or check your email’s spam filter. When you order a large Veggie Supreme through a pizza chain’s app and it remembers your order for next time, machine learning just kicked in. Ditto when Facebook recommends tagging yourself in a photo because it recognizes your face or when Amazon reminds you to buy more of a product.
And of course, artificial intelligence also plays a role in the news you are exposed to and consume every day. When it comes to journalism, AI is most commonly used in processes that involve automating basic tasks, such as recommending content, analyzing large datasets, interacting with users, or in some cases, writing short news briefs. It is thus a type of technological actant whose role in journalism is becoming increasingly important.
One simple example of AI in journalism is the use of recommendation algorithms in online news outlets. When you visit a news outlet’s website, you will likely see several widgets on the sides of an article directing you to other articles you might consider reading next. The recommendations that appear in those widgets may be tailored specifically to you, based on the stories you’ve read on that outlet’s site before or even elsewhere on the Web. For example, if you usually click on political news stories or articles about the Patriots, a news outlet’s algorithm might track your behavior over time and learn to point you toward more stories about politics or the Patriots.
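To make the mechanism concrete, here is a minimal sketch of how such a content-based recommender might work, assuming a simple approach that scores candidate articles by how often their topic tags appear in a reader’s click history (all names and data here are hypothetical, and production recommenders are far more sophisticated):

```python
from collections import Counter

# Hypothetical reading history: topic tags from stories a reader has clicked.
reading_history = [
    ["politics", "elections"],
    ["sports", "patriots", "nfl"],
    ["politics", "congress"],
    ["sports", "patriots"],
]

# Build a simple interest profile: how often each tag appears in the history.
interest_profile = Counter(tag for story in reading_history for tag in story)

def score(candidate_tags):
    """Score a candidate article by how well its tags match the profile."""
    return sum(interest_profile[tag] for tag in candidate_tags)

candidates = {
    "Patriots clinch playoff berth": ["sports", "patriots", "nfl"],
    "New bill advances in Congress": ["politics", "congress"],
    "Local bakery wins award": ["food", "local"],
}

# Recommend the highest-scoring articles first; politics and Patriots
# stories rise to the top for this particular reader.
for title, tags in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{score(tags):>2}  {title}")
```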
Notably, news algorithms are not the only ones that affect journalism. Social media platforms, though they are not news outlets themselves, also have algorithms built into their sites that influence how many people—and what kinds of people—see the news stories shared on those platforms. And as human developers adjust those algorithms over time, news outlets have to be on the lookout for changing priorities. For example, if a platform’s algorithms are tuned to deemphasize headlines that feature questions, then news outlets may opt to adjust their headline writing so they don’t miss out on valuable web traffic.
Within journalism, The New York Times, The Texas Tribune, and numerous other outlets have applied AI to the creation of news chatbots that foster more conversational and interactive news consumption experiences for audiences. The New York Times uses artificial intelligence to monitor reader comments on news stories and flag (and even outright remove) those that violate its digital standards. The BBC uses AI to aggregate news about specific topics and create compendiums of information for journalists covering those topics. Reuters uses AI to create compelling data visualizations to display information for audiences.
Journalists also regularly rely on artificial intelligence to transcribe interviews recorded in audio or video files. A number of different digital tools allow journalists to upload their multimedia recordings and later download documents with text transcriptions of the entire interview. Those transcripts are not perfect, of course, and do still require human oversight to ensure that any quotes in a story are accurate. However, such algorithms are routinely used by journalists to make their interviews easily searchable.
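As a rough illustration, here is a minimal sketch of such a transcription workflow using the open-source Whisper speech-to-text model; the file name and search keyword are hypothetical, and real newsroom tools wrap this kind of pipeline in a polished interface:

```python
import whisper  # open-source speech-to-text model: pip install openai-whisper
                # (also requires the ffmpeg binary to decode audio files)

# Load a small pretrained model and transcribe a recorded interview.
# "interview.mp3" is a hypothetical file name.
model = whisper.load_model("base")
result = model.transcribe("interview.mp3")
transcript = result["text"]

# Make the interview searchable: find sentences mentioning a keyword.
def search(text, keyword):
    return [s.strip() for s in text.split(".") if keyword.lower() in s.lower()]

for sentence in search(transcript, "budget"):
    print(sentence)
```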
Another example of artificial intelligence in journalism is the practice of automated news, or the use of machines to automatically generate news stories about recurring topics. For example, the Los Angeles Times famously developed Quakebot to record and document local earthquakes with short news pieces published right after the earthquakes occurred. Today, many news outlets employ similar bots or machine-learning processes to craft news stories about dedicated topics that are deemed appropriate for machine-crafted coverage. Bloomberg, the Washington Post, and the BBC all employ their own in-house automated storytelling tools.
However, only a small segment of news stories lends itself to automation, and those stories generally rely on significant pre-existing, structured data from which AI can draw information to create a news story. And the articles that result are relatively basic: They express the facts in clear, no-nonsense sentences with surface-level analysis. They do not feature compelling ledes or writerly panache, and they don’t win Pulitzers.
To illustrate this, consider two domains that feature especially heavily in automated journalism: finance and sports. News agencies like The Associated Press and Reuters produce tens of thousands of automated financial news briefs each year by mining quarterly filings with the Securities and Exchange Commission, identifying the most significant aspects of each filing, and slotting that information into one of several story templates according to rules encoded by their developers. Similarly, automated journalism is frequently used to generate sports game recaps based on algorithmic interpretations of play-by-play information and historical data. However, an algorithm would have a much harder time writing a story that gets at how a CEO’s sex scandal might affect their company, or a feature on why a player decided to sit out the season due to health concerns. Put another way, news algorithms are already in frequent use today, but they come with significant limitations.
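A minimal sketch of the template-filling logic behind such earnings briefs might look like the following; the filing data, the templates, and the beat-or-miss rule are all hypothetical stand-ins for what a production system would use:

```python
# Hypothetical quarterly-earnings data, e.g., parsed from an SEC filing.
filing = {
    "company": "Acme Corp",
    "quarter": "Q2",
    "eps": 1.42,
    "eps_expected": 1.31,
    "revenue_billions": 4.8,
}

# A few story templates; a simple rule picks one based on the numbers.
BEAT = ("{company} reported {quarter} earnings of ${eps:.2f} per share, "
        "beating analyst expectations of ${eps_expected:.2f}, on revenue "
        "of ${revenue_billions} billion.")
MISS = ("{company} reported {quarter} earnings of ${eps:.2f} per share, "
        "missing analyst expectations of ${eps_expected:.2f}, on revenue "
        "of ${revenue_billions} billion.")

def write_brief(filing):
    """Pick a template based on whether earnings beat expectations."""
    template = BEAT if filing["eps"] >= filing["eps_expected"] else MISS
    return template.format(**filing)

print(write_brief(filing))
```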
Why are these machine-crafted stories and experiences so valuable? Proponents of AI in journalism—and especially of automated journalism—believe that automating some of the industry’s most basic tasks and stories relieves human journalists of a burden and gives them more time to spend on more in-depth journalism. That’s because algorithms can generate news stories far more quickly than human journalists. Those stories can serve as the first documentation of an event or issue for audiences, and they may be only the first story about a topic that a human journalist later follows up on. Additionally, they can be used to cover stories that human journalists simply lack the resources to cover themselves, such as the not-so-great high school football team on the outskirts of the county.
AI can also quickly personalize stories to the specific audiences or readers who will consume them. For example, if a weather event happens near you, automated news tools can apply machine learning to personalize the story you see, including the specific weather warnings for your area, a personalized forecast, and the storm’s radius. Similarly, such tools can take a story about an issue with broad impact, such as climate change, and add well-placed paragraphs that emphasize the impact in your area by detecting where you are logged in from.
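Here is a minimal sketch of how that kind of location-based personalization might work, assuming the tool simply splices a region-specific paragraph into a base story (the reader profile, story text, and local details are all hypothetical):

```python
# Hypothetical reader profile derived from login location.
reader = {"city": "Springfield", "region": "Midwest"}

base_story = [
    "A powerful storm system is moving across the country this week.",
    "Forecasters expect heavy rain and high winds in several regions.",
]

# Hypothetical per-region details an automated tool might splice in.
local_details = {
    "Midwest": "In {city}, a flood warning is in effect through Friday, "
               "with 2 to 4 inches of rain expected.",
    "Northeast": "In {city}, coastal flooding is the main concern.",
}

def personalize(story_lines, reader):
    """Insert a locally relevant paragraph based on the reader's region."""
    lines = list(story_lines)
    detail = local_details.get(reader["region"])
    if detail:
        lines.insert(1, detail.format(city=reader["city"]))
    return "\n\n".join(lines)

print(personalize(base_story, reader))
```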
AI can also serve as another set of eyes in the newsroom. This is especially useful when it comes to sifting through enormous datasets, such as document leaks, crime statistics, or spreadsheets with government expenditures. Rather than writing stories, algorithms can be used to simply notify journalists of significant trends or patterns that are worth exploring through human reporting.
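For instance, a minimal sketch of such a tip-generating alert, assuming a simple rule that flags values more than two standard deviations from the mean (the spending figures here are hypothetical), might look like this:

```python
import statistics

# Hypothetical monthly expenditure data from a government spreadsheet.
spending = {
    "Jan": 102_000, "Feb": 98_500, "Mar": 101_200, "Apr": 99_800,
    "May": 100_400, "Jun": 287_000,  # an unusual spike worth a look
}

values = list(spending.values())
mean = statistics.mean(values)
stdev = statistics.stdev(values)

# Flag months that deviate sharply from the norm and notify a reporter,
# rather than writing a story about them automatically.
for month, amount in spending.items():
    if abs(amount - mean) > 2 * stdev:
        print(f"ALERT: {month} spending of ${amount:,} is unusual "
              f"(average is ${mean:,.0f}). Worth a closer look.")
```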
Although it might be easy to see AI as a threat to journalism jobs—after all, we know from popular media that an age of machine overlords will come—it is important to underscore that creative industries like journalism are likely to continue to depend on human labor for the foreseeable future. First, all of the applications of AI described above were created by humans, and they require some degree of continued human oversight, not only to work but also to ensure that they fulfill their purposes in a way that is accurate, efficient, and helpful to audiences. (For example, human journalists often review and curate the outputs of their sites’ news recommendation algorithms.) Second, algorithms still rely on predictable data streams to work, so the information needs to already be digitized (and often in reliably structured ways) or exist in a way that can be easily and reliably digitized on the fly. This severely limits the domains that automation can cover. Third, automated stories still struggle to answer “how” and “why” questions, or to go deeper into stories. Human journalists are thus still needed to follow up on important stories and unearth information that an algorithm is simply incapable of getting at.
Although AI can benefit newsrooms and news audiences in a variety of ways, it can also be harmful. One of the most well-documented harms of artificial intelligence across technologies and applications is algorithmic bias. Because machine-learning systems are created and developed by humans, they can be just as flawed as the humans behind them.
News coverage of the tech industry has repeatedly illustrated that platforms and the algorithms that power them can have flaws built into their code. For example, Amazon built an experimental hiring tool that learned to penalize résumés from women. And for a while, Google Photos’ image-recognition algorithm labeled photos of Black people as gorillas. (Facial-recognition technology has repeatedly been shown to identify men more easily than women, and white people more easily than people of color.) More recently, a U.S. medical algorithm that plays a role in making health care recommendations for more than 100 million Americans incorrectly categorized Black patients as healthier than white patients who were equally ill. These are only a few examples of the dangerous results of algorithmic bias in digital tools developed by people who do not embody the diversity of the populations they serve.
Within the context of journalism, algorithms can unintentionally reproduce problematic depictions and promote inaccurate stereotypes. For example, an algorithmically generated story about a decrease in the number of immigrants entering the United States may automatically embed a stock photo of immigrants being detained by immigration authorities. That, in turn, would promote and perpetuate the association between immigration and criminality, simply because the algorithm has learned that previous stories about immigration tended to focus on elements of legality and crime. Thus, journalistic actors must remain mindful of how they are employing artificial intelligence, and how such applications may advance or detract from their mission to represent truth.
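A toy sketch can show how this kind of bias creeps in: if an image-selection algorithm naively picks whichever photo was most often paired with a topic in past coverage, a skewed archive produces skewed choices (the archive and file names below are hypothetical):

```python
from collections import Counter

# Hypothetical archive of past stories: topic paired with the photo used.
past_stories = [
    ("immigration", "photo_detention.jpg"),
    ("immigration", "photo_border_patrol.jpg"),
    ("immigration", "photo_detention.jpg"),
    ("immigration", "photo_naturalization_ceremony.jpg"),
]

def pick_stock_photo(topic, archive):
    """Naively choose the photo most often paired with the topic before."""
    counts = Counter(photo for t, photo in archive if t == topic)
    return counts.most_common(1)[0][0] if counts else None

# A neutral story about declining immigration numbers still gets a
# detention photo, because past coverage skewed toward enforcement.
print(pick_stock_photo("immigration", past_stories))  # photo_detention.jpg
```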
Artificial intelligence is the development and use of “smart” machines to perform some of the same tasks as humans.
When it comes to journalism, AI is most commonly used in processes that involve automating basic tasks, such as recommending content, analyzing large datasets, and interacting with users.
Automated news involves the use of algorithms to automatically generate news stories. Today, some journalistic outlets produce tens of thousands of automated news stories a year, though such stories are typically restricted to domains like finance and sports.
Although it might be easy to see AI as a threat to journalism jobs, humans will likely continue to play a critical role in journalism for the foreseeable future.
Because news algorithms are created and developed by humans, they can be just as flawed as the humans behind them.