Debate Text Mining: Trump Has a Cousin in Perot


Donald Trump’s candidacy is unprecedented. The Republican nominee’s supporters, his critics and we data nerds who are just trying to get our heads around this election would all agree with that statement for a variety of reasons. And some of the singular aspects of the Trump campaign -- his dominance of media coverage, the controversies that do and don’t stick to him, his unconventional ground game (or lack thereof) -- make it hard to compare him to recent past candidates.

So when I started collecting past presidential debate transcripts and analyzing them using text mining (a set of statistical tools designed to help make inferences about bodies of text), I expected that Trump’s rhetoric would differ starkly from that of past candidates. His word choice stood out from his opponents’ in the GOP primary, so it made sense that it might also set him apart from recent general election candidates.

And in many cases it does -- but there’s some indication that Trump is also echoing 1992 and 1996 third-party candidate Ross Perot.

Trump’s Nearest Relative -- Ross Perot?

Simply watching Trump’s and Perot’s speeches suggests that there’s some commonality between them. One of Trump’s key issues is trade, and Perot famously said trade agreements would create a “giant sucking sound” of jobs leaving the United States. Like Trump, Perot was also more plain-spoken than his competitors. It doesn’t take much training to notice the difference between Perot’s or Trump’s manner of speaking and that of a more “patrician” politician like George H.W. Bush or John Kerry.

The intuitive, qualitative differences between candidates over the last two-plus decades also show up when you feed their debate transcripts into text mining algorithms. The data was cleaned up quite a lot before being processed. For example, articles, conjunctions and other common, primarily structural words were removed; suffixes were often deleted so words such as “going” and “gone” would revert to the root word “go”; and words that were unique to only a couple of candidates were taken out. I also controlled for candidates appearing in different numbers of debates and then calculated the frequency with which each candidate used each remaining word. Once the data was processed, the algorithms detected some interesting patterns.
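For readers curious what that cleanup looks like in practice, here is a minimal Python sketch of the steps just described, run on a made-up line of debate text. The stopword list and suffix-stripping rules are crude illustrative stand-ins, not the actual ones used in this analysis.

```python
from collections import Counter

# Illustrative stand-ins for the real stopword and suffix lists.
STOPWORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "is", "are"}
SUFFIXES = ("ing", "ed", "es", "s")  # longer suffixes checked first

def stem(word):
    # Revert words such as "going" and "goes" toward the root "go".
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[: -len(suffix)]
    return word

def word_frequencies(text):
    # Lowercase, drop stopwords, stem, then normalize the counts so that
    # candidates with different amounts of speaking time are comparable.
    tokens = [stem(w) for w in text.lower().split() if w not in STOPWORDS]
    counts = Counter(tokens)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

freqs = word_frequencies("we are going to go and we go to trade")
# "going" and "go" collapse to the same root, and "are"/"to"/"and" drop out.
```

The output is one frequency vector per candidate, and those vectors are what the clustering algorithms below actually compare.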

A method called hierarchical clustering showed that Trump often stood apart from all other candidates but did at times sound like Perot. Though “hierarchical clustering” is a mouthful, the concept is rather simple. Take a group of things (in this case, processed versions of what each presidential candidate said in general election debates since 1992), calculate some measure of how different these things are from each other and then use those statistics to divide these things into groups and subgroups. The groups and subgroups can then be laid out in a dendrogram.

Dendrograms are a bit like family trees -- and in this case, every person represented is a member of the “family” of presidential candidates. Trump and Perot could be seen as one side of the family, and the other eight candidates could be seen as the other side. Every member of the Kerry-Romney-McCain-Obama-Clinton-Gore-Bush group is just as related to Perot as to Trump. Within the larger side of the family, Kerry and Romney have their own subfamily distinct from McCain, Obama, Gore, the Clintons and the Bushes. In other words, the dendrogram displays how these candidates are best categorized into groups and subgroups based on the frequency with which they used various words at their debates. More details on how to read dendrograms (such as how to interpret “height” and an explanation of the inner workings of the method) can be found here.
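The merge-the-closest-pair logic behind a dendrogram can be sketched in a few lines of Python. The word-frequency vectors below are invented for illustration (the real analysis used full processed debate vocabularies), and the sequence of merges this loop records is exactly what a dendrogram draws.

```python
# Each candidate reduced to a short, invented word-frequency vector.
candidates = {
    "Trump": [0.9, 0.1],
    "Perot": [0.85, 0.15],
    "Kerry": [0.2, 0.9],
    "Romney": [0.3, 0.8],
}

def distance(u, v):
    # Euclidean distance between two frequency vectors.
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

# Start with every candidate in a cluster of one, then repeatedly merge
# the two closest clusters -- agglomerative hierarchical clustering.
clusters = [({name}, vec) for name, vec in candidates.items()]
merges = []
while len(clusters) > 1:
    i, j = min(
        ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
        key=lambda pair: distance(clusters[pair[0]][1], clusters[pair[1]][1]),
    )
    merged_names = clusters[i][0] | clusters[j][0]
    # Represent the merged cluster by the midpoint of the two centroids.
    merged_vec = [(x + y) / 2 for x, y in zip(clusters[i][1], clusters[j][1])]
    merges.append(merged_names)
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
    clusters.append((merged_names, merged_vec))
# With these toy vectors, Trump-Perot merge first, then Kerry-Romney.
```

Reading the merge list from first to last reproduces the tree bottom-up: early merges are the tight subfamilies, and the final merge is the root.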

According to the dendrogram, Trump and Perot belong together. Further math shows that this cluster is statistically significant. To put that in grossly oversimplified terms, they are much more similar to each other than two randomly chosen candidates might be. This suggests that there is some resemblance between what Trump and Perot said in the debates (I will go into further detail on this in the next section). Similar results often held true when candidates from 1976 onwards were included.
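That “grossly oversimplified” framing can itself be made concrete: compare the Trump-Perot distance with the distances between all other candidate pairs. The distances below are invented, and this is only a toy stand-in for the actual significance testing, but it captures the intuition.

```python
from itertools import combinations

# Invented pairwise distances between candidates' word-frequency vectors.
names = ["Trump", "Perot", "Kerry", "Romney"]
dist = {
    frozenset({"Trump", "Perot"}): 0.14,
    frozenset({"Trump", "Kerry"}): 0.99,
    frozenset({"Trump", "Romney"}): 0.92,
    frozenset({"Perot", "Kerry"}): 0.95,
    frozenset({"Perot", "Romney"}): 0.88,
    frozenset({"Kerry", "Romney"}): 0.16,
}
observed = dist[frozenset({"Trump", "Perot"})]
all_pairs = [dist[frozenset(p)] for p in combinations(names, 2)]
# Fraction of pairs at least as close as Trump and Perot: a crude stand-in
# for a p-value. A small fraction suggests the pairing is not an accident.
fraction = sum(d <= observed for d in all_pairs) / len(all_pairs)
```

If Trump and Perot are closer than nearly every other pair, the fraction is small, which is the flavor of evidence the clustering significance test provides.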

It’s notable that John Kerry and Mitt Romney were also in a statistically significant cluster, which could reflect their shared upscale, Northeastern, highbrow style of communication.

It’s important to note that hierarchical clustering didn’t always put Trump and Perot in the same cluster. In some cases, adjustments beneath the hood of the algorithm (e.g., using a different metric of similarity between candidate debate transcripts) led to slightly different results. But under almost all reasonable starting assumptions, Trump either landed in a cluster with Perot or formed his own statistically significant cluster with Perot (or Perot’s cluster) close by. Kerry and Romney also often had their own cluster; I didn’t see any consistent, statistically significant pattern among the other candidates.
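To see why the choice of similarity metric matters, here is a small Python illustration using two invented frequency vectors. Two common metrics can disagree about who is close to whom, which is enough to reshuffle clusters.

```python
from math import sqrt

# Invented three-word frequency vectors for two hypothetical speakers.
a = [0.5, 0.5, 0.0]  # uses two words equally, and often
b = [0.1, 0.1, 0.0]  # same proportions, but speaks less overall

def euclidean(u, v):
    return sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

def cosine_distance(u, v):
    # 1 minus the cosine of the angle between the vectors: 0 means the
    # words are used in identical proportions, regardless of volume.
    dot = sum(x * y for x, y in zip(u, v))
    norms = sqrt(sum(x * x for x in u)) * sqrt(sum(y * y for y in v))
    return 1 - dot / norms

# Euclidean distance says a and b are far apart; cosine distance says
# they are essentially identical.
```

A clustering run built on Euclidean distance would separate these two speakers, while one built on cosine distance would group them -- the same kind of under-the-hood adjustment that shifted the dendrogram results.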

Other methods also suggested that Trump and Perot used similar language. I used another clustering algorithm called k-means to break the candidates’ debate speeches into discrete groups. Think of this less like building a family tree and more like how high school students self-sort by social group when they enter the cafeteria. In other words, the algorithm divided the candidates into a set number of groups based on which candidates spoke most similarly.

If the algorithm was set to break the candidates up into two groups, it showed Trump and Perot in one group and the rest of the candidates in the other. And when it was set to break them into three groups, it showed Perot and Trump in one group, Kerry and Romney in another and the rest in a third group. In other words, dividing candidates up by word usage would put Trump and Perot at one lunch table, maybe Romney and Kerry at another, and everyone else somewhere else.
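The lunch-table sorting can be sketched as a bare-bones k-means loop. The two-number summaries of each candidate’s word usage below are made up for illustration (the real analysis ran on full word-frequency vectors), and the starting table centers are chosen by hand rather than at random.

```python
# Invented two-dimensional "word usage" summaries, one per candidate.
points = {
    "Trump": (0.9, 0.1), "Perot": (0.8, 0.2),
    "Kerry": (0.2, 0.9), "Romney": (0.3, 0.8),
    "Obama": (0.35, 0.6), "McCain": (0.4, 0.55),
}

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: each candidate sits at the nearest table.
        groups = {i: [] for i in range(len(centroids))}
        for name, p in points.items():
            nearest = min(
                range(len(centroids)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            groups[nearest].append(name)
        # Update step: move each table's center to the mean of its members.
        for i, members in groups.items():
            if members:
                centroids[i] = tuple(
                    sum(points[m][d] for m in members) / len(members)
                    for d in range(len(centroids[i]))
                )
    return groups

# With two tables, the toy data puts Trump and Perot together,
# apart from everyone else.
groups = kmeans(points, centroids=[(1.0, 0.0), (0.0, 1.0)])
```

Setting three starting centers instead of two is the only change needed to reproduce the three-group split described above.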

This result wasn’t as strong or convincing as the results I got using k-means on the GOP primary candidates’ announcement speeches (see here), but it added to the results from hierarchical clustering and lent support to the idea that Perot may, in some limited ways, be the best recent precedent for Trump.

A Perot-Trump Connection Makes Some Sense

Trump and Perot are obviously different candidates running in different eras. The set of issues facing the country now is different from that of the 1990s, and it would be easy to point out numerous dissimilarities between Trump and Perot in both style and substance. But there seem to be some similarities that were picked up by these algorithms.

Trump has focused on trade for much of his campaign, promoting protectionist policies and criticizing free-trade agreements. Perot was also worried about the potentially harmful effect of NAFTA (see the aforementioned “giant sucking sound”). This shared policy disposition might influence their choice of language and thus cause these algorithms to place them together.

Both men are also plain-spoken compared to their rivals. Last year Josh Katz of The Upshot used quantitative measures to show that Trump’s language was much simpler than that of his opponents in the GOP primary. I haven’t done a similar analysis of Perot’s language, but there’s a tangible difference in accessibility between a Perot speech and one of George H.W. Bush’s. Note that this isn’t a criticism -- if a fancier word or a more complex sentence structure doesn’t help communicate meaning, it’s often not worth the extra breath or ink. Trump and Perot have realized this, and their more direct manner of speaking helped them reach some voters.

Finally, Trump and Perot share a distrust of elites. Trump won the Republican nomination by essentially running against the party establishment and its values, and Perot ran as an independent in 1992 (he was the Reform Party nominee in 1996). Neither candidate showed a great love for political institutions (although Perot’s tone was significantly less hostile than Trump’s), and the algorithm may have picked that up as well.

This doesn’t mean that Trump will necessarily win Perot voters or those who fit that profile. But if he does, then his language and similarity to Perot could be an interesting data point.

David Byler is an elections analyst for RealClearPolitics. He can be reached at dbyler@realclearpolitics.com. Follow him on Twitter @davidbylerRCP.
