In this project, we attempt to answer the question: what 'types' of firms were hurt more or less by COVID-19? First, we identify risk factors that may help define these firm 'types'. Then, we perform basic natural language processing on the 10-Ks of S&P 500 firms to detect which of them are most exposed to the risks we have identified. Finally, we analyze how these risk measures correlate with stock returns during the week of March 9, 2020 and draw our conclusions.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# load the sample built earlier: S&P 500 accounting data plus the 10-K text risk hit counts
firms_df = pd.read_csv('output/sp500_accting_plus_textrisks.csv')
The risk measurements this project identifies are competition, litigation, and supply chain. To measure each, we search every S&P 500 firm's 10-K for pairs of keywords that, when they appear near each other, suggest the firm has that characteristic. Every time such a keyword pair is found, a 'hit' is recorded, and the total number of hits per firm is saved in the firms_df data frame.
These risk measurements were chosen to identify weaknesses that may have played a role in a firm's success or decline during the first stage of the COVID-19 lockdown in March 2020. I suspected that firms in highly competitive markets, firms with substantial litigation concerns, and firms with already-present weaknesses in their supply chains would be most affected by the added pressure of the impending pandemic.
In calculating the risk measurements from each firm's 10-K, the number of hits per firm varied by metric. Some risk measurements returned hit counts mostly between 0 and about 6, while others ranged up to about 20. This is reasonable: some firms likely do not have significant problems with a given risk, so little or nothing about it appears in their 10-K.
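A quick way to see these per-metric ranges once the hit columns are built is to aggregate them directly (the column names below are the ones summarized later in this report):

# summarize the spread of each risk measure's hit counts
hit_cols = ['compet_hits', 'litigat_hits', 'sply_ch_hits1', 'sply_ch_hits2', 'sply_ch_hits3']
firms_df[hit_cols].agg(['min', 'median', 'max'])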
Let's discuss the methodology used to find keywords in each firm's 10-K. The same process was repeated for every risk measure's keyword search; we will use the search for competition as an example.
To find out which firms face high competition, I search for the strings "competition", "competitor", or "compet" within 5 words of any of "with", "against", "great", "intens", "risk", or "susceptible", as seen in the code below.
firms_df.at[index, 'compet_hits'] = len(re.findall(
    NEAR_regex(['(competition|competitor|compet)',
                '(with|against|great|intens|risk|susceptible)'], 5, partial=True),
    text))
This uses the "NEAR_regex" function to build a pattern that re.findall applies to the text of the 10-K, finding every instance where the keywords appear near each other. The matches are then counted, and that total is stored in the "compet_hits" column of the firms_df data frame.
Let's look at an example from Accenture's 10-K. The phrase reads, "There is intense competition for scarce talent with market-leading skills and capabilities in new technologies...." The code above recognizes this sentence as an indication that Accenture faces intense competition, so it records a hit in the "compet_hits" column.
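The NEAR_regex helper itself is defined outside this report, so the sketch below is only a hypothetical stand-in showing how such a proximity search could be built; the actual implementation may differ in its details. Applied to a toy version of the Accenture sentence, it records one hit because "intens" and "competition" appear within five words of each other.

import re

def near_regex_sketch(two_word_groups, max_words_between=5, partial=True):
    # Hypothetical stand-in for NEAR_regex: match a word from the first group
    # within `max_words_between` words of a word from the second group, in either
    # order. partial=True lets a keyword match inside a longer word
    # (e.g. "compet" inside "competitive").
    a, b = two_word_groups
    if not partial:
        a, b = r'\b' + a + r'\b', r'\b' + b + r'\b'
    # finish the (possibly partial) word, then allow up to N intervening words
    gap = r'\S*\s+(?:\S+\s+){0,' + str(max_words_between) + r'}'
    return '(?:' + a + gap + b + '|' + b + gap + a + ')'

# toy check on the Accenture-style sentence: "intens" near "competition" -> 1 hit
sentence = "There is intense competition for scarce talent with market-leading skills."
pattern = near_regex_sketch(['(competition|competitor|compet)',
                             '(with|against|great|intens|risk|susceptible)'], 5)
print(len(re.findall(pattern, sentence)))  # 1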
To find out which firms have litigation concerns, I use the following line of code.
firms_df.at[index, 'litigat_hits'] = len(re.findall(NEAR_regex(['(litigat|lawsuit)','(significant|concern|weakness|liabil|vulnerab|against)'],5,partial=True),text))
This applies the same approach we saw above for competition. Let's look at an example of it working. AES's 10-K states, "There is ongoing uncertainty, and significant litigation, regarding...." This sentence produces an intended hit in the "litigat_hits" column.
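Using the same hypothetical sketch from above on a toy version of the AES phrase illustrates the idea (again, this only demonstrates the pattern, not the project's exact pipeline):

# toy check on the AES-style phrase: "significant" near "litigat" -> 1 hit
aes_phrase = "There is ongoing uncertainty, and significant litigation, regarding ..."
litigat_pattern = near_regex_sketch(['(litigat|lawsuit)',
                                     '(significant|concern|weakness|liabil|vulnerab|against)'], 5)
print(len(re.findall(litigat_pattern, aes_phrase)))  # 1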
To find out which firms have supply chain concerns, I make three separate searches so that the risk is observed in multiple ways. The code for this is below.
# find and save supply chain hits 1
firms_df.at[index, 'sply_ch_hits1'] = len(re.findall(NEAR_regex(['(suppl|supply chain)','(concern|weakness|liabil|vulnerab|risk|susceptible|challeng|chang)'],5,partial=True),text))
# find and save supply chain hits 2
firms_df.at[index, 'sply_ch_hits2'] = len(re.findall(NEAR_regex(['(resource|material)','(scarc|difficult|challeng|chang)'],5,partial=True),text))
# find and save supply chain hits 3
firms_df.at[index, 'sply_ch_hits3'] = len(re.findall(NEAR_regex(['(supply|suppli)','(bankrupt|fail|failure|failed|difficult|competition|challeng|chang)'],5,partial=True),text))
The first line of code looks for when "supply" or "supply chain" is near a word that indicates a weakness being described. The second line finds instances of when a resource or material is scarce or challenging to acquire. Finally, the third line of code looks for when supply or a supplier indicates bankruptcy, fails, has difficulty, or is challenged in some way. Each of these produces hits at instances of phrases that indicate a supply chain issue, similarly to how they did for the competition and litigation examples.
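Putting these searches together, the build step (run before this analysis) loops over every firm, reads its downloaded 10-K, cleans the text, and records the hit counts. The loop below is a simplified sketch: the file path, the BeautifulSoup cleaning step, and the skip-if-missing check are assumptions for illustration, and the project's actual download and cleaning code may differ.

import os
import re
from bs4 import BeautifulSoup  # assumed HTML-cleaning approach

for index, row in firms_df.iterrows():
    path = f"10k_files/{row['Symbol']}_10k.html"  # hypothetical storage location
    if not os.path.exists(path):
        continue  # no 10-K: leave this firm's hit columns as null
    with open(path, encoding='utf-8') as f:
        html = f.read()
    # strip HTML tags, lowercase, and collapse whitespace before searching
    text = re.sub(r'\s+', ' ', BeautifulSoup(html, 'html.parser').get_text().lower())
    firms_df.at[index, 'compet_hits'] = len(re.findall(
        NEAR_regex(['(competition|competitor|compet)',
                    '(with|against|great|intens|risk|susceptible)'], 5, partial=True), text))
    # ...the litigation and supply chain searches shown above run here as well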
Our final sample is the data set in firms_df. Let's analyze this data to ensure that it makes sense. First, we will look at the first five rows and the shape of the table.
firms_df.head()
| | Symbol | Security | SEC filings | GICS Sector | GICS Sub-Industry | Headquarters Location | Date first added | CIK | Founded | compet_hits | ... | prof_a | ppe_a | cash_a | xrd_a | dltt_a | invopps_FG09 | sales_g | dv_a | short_debt | wk_rets
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | MMM | 3M | reports | Industrials | Industrial Conglomerates | Saint Paul, Minnesota | 1976-08-09 | 66740 | 1902 | 1.0 | ... | 0.193936 | 0.228196 | 0.065407 | 0.042791 | 0.408339 | 2.749554 | NaN | 0.074252 | 0.143810 | -0.077905 |
1 | AOS | A. O. Smith | reports | Industrials | Building Products | Milwaukee, Wisconsin | 2017-07-26 | 91142 | 1916 | 5.0 | ... | 0.177698 | 0.193689 | 0.180314 | 0.028744 | 0.103303 | NaN | NaN | 0.048790 | 0.056170 | -0.028109 |
2 | ABT | Abbott | reports | Health Care | Health Care Equipment | North Chicago, Illinois | 1964-03-31 | 1800 | 1888 | 2.0 | ... | 0.118653 | 0.132161 | 0.060984 | 0.035942 | 0.256544 | 2.520681 | NaN | 0.033438 | 0.088120 | -0.001101 |
3 | ABBV | AbbVie | reports | Health Care | Pharmaceuticals | North Chicago, Illinois | 2012-12-31 | 1551152 | 2013 (1888) | 9.0 | ... | 0.178107 | 0.037098 | 0.448005 | 0.076216 | 0.709488 | 2.211589 | NaN | 0.071436 | 0.057566 | -0.038844 |
4 | ABMD | Abiomed | reports | Health Care | Health Care Equipment | Danvers, Massachusetts | 2018-05-31 | 815094 | 1981 | 16.0 | ... | 0.225749 | 0.137531 | 0.466354 | 0.088683 | 0.000000 | 12.164233 | NaN | 0.000000 | NaN | -0.090781 |
5 rows × 54 columns
firms_df.shape
(505, 54)
This data set has 505 rows, which makes sense, since it began with the original S&P 500 firm data from input/sp500_firms.csv, which also has 505 firms. Now, let's look at the risk measurement hit counts.
firms_df[['compet_hits', 'litigat_hits', 'sply_ch_hits1', 'sply_ch_hits2', 'sply_ch_hits3']].describe()
| | compet_hits | litigat_hits | sply_ch_hits1 | sply_ch_hits2 | sply_ch_hits3
---|---|---|---|---|---|
count | 492.000000 | 492.000000 | 492.000000 | 492.000000 | 492.000000 |
mean | 10.231707 | 4.032520 | 2.947154 | 3.817073 | 1.784553 |
std | 7.063515 | 5.196638 | 2.686856 | 3.110175 | 2.274969 |
min | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
25% | 5.000000 | 1.000000 | 1.000000 | 2.000000 | 0.000000 |
50% | 9.000000 | 3.000000 | 2.000000 | 3.000000 | 1.000000 |
75% | 14.000000 | 5.000000 | 4.000000 | 5.000000 | 3.000000 |
max | 40.000000 | 63.000000 | 24.000000 | 19.000000 | 17.000000 |
Given the counts, almost every firm has hits recorded. The reason not every firm has a value is that only 492 of them had 10-Ks available to download; if there is no 10-K for a firm, there is no text to search.
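A quick sanity check is to count the null hit values directly; they should line up with the 13 firms (505 minus 492) that had no 10-K:

# firms with no 10-K have null hit counts: expect 505 - 492 = 13
firms_df['compet_hits'].isna().sum()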
Next, let's look at some of the accounting data and our calculated weekly returns.
firms_df[['gvkey', 'Ch_Cash', 'short_debt', 'wk_rets']].describe()
| | gvkey | Ch_Cash | short_debt | wk_rets
---|---|---|---|---|
count | 355.000000 | 355.000000 | 349.000000 | 490.000000 |
mean | 45305.952113 | 0.008871 | 0.112481 | -0.121810 |
std | 61170.060945 | 0.064776 | 0.111168 | 0.090491 |
min | 1045.000000 | -0.315808 | 0.000000 | -0.610145 |
25% | 6286.000000 | -0.007922 | 0.028043 | -0.159331 |
50% | 13700.000000 | 0.003967 | 0.084992 | -0.106716 |
75% | 61582.500000 | 0.023910 | 0.151231 | -0.064739 |
max | 316056.000000 | 0.383711 | 0.761029 | 0.117748 |
As seen here, a notable amount of accounting data is missing. For weekly returns, 490 of the firms had the data needed for the calculation. A mean weekly return of about -0.12 makes sense, since that is roughly how much the S&P 500 lost during the week of March 9, 2020.
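For reference, a weekly return like wk_rets can be computed by compounding daily returns over the week of March 9, 2020. The sketch below uses made-up daily returns for a single hypothetical firm; the project's actual return data and calculation may differ.

import pandas as pd

# hypothetical daily returns for one firm during the week of March 9, 2020
daily_rets = pd.DataFrame({'Symbol': ['XYZ'] * 5,
                           'date': pd.to_datetime(['2020-03-09', '2020-03-10', '2020-03-11',
                                                   '2020-03-12', '2020-03-13']),
                           'ret': [-0.06, 0.03, -0.05, -0.08, 0.09]})

# compound each firm's daily returns into a single weekly return
wk_rets = (daily_rets.groupby('Symbol')['ret']
           .apply(lambda r: (1 + r).prod() - 1)
           .rename('wk_rets')
           .reset_index())
print(wk_rets)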
Missing data is certainly an issue when attempting to meet this project's goals. The program that iterates through 10-Ks has to account for firms whose 10-Ks do not exist, and the same problem of missing data has to be handled when merging tables throughout the code. In every instance where a merge takes place, the code performs a left merge onto the original firms_df table, so no firms are dropped when the other table lacks data for them. One issue fixed during development was that firms without 10-Ks were inheriting the previous firm's hit count rather than simply receiving a null value. Also, in computing the correlation, not every firm has a weekly return, and not every firm has a hit count; this is handled automatically inside the .corr() method.
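Both behaviors can be illustrated with a small toy example: a left merge keeps every row of the left table and fills non-matches with NaN, and .corr() handles missing values pairwise rather than requiring complete rows (the tickers and values below are made up):

import pandas as pd

left = pd.DataFrame({'Symbol': ['AAA', 'BBB', 'CCC'], 'compet_hits': [4.0, 9.0, 2.0]})
right = pd.DataFrame({'Symbol': ['AAA', 'CCC'], 'wk_rets': [-0.08, -0.01]})

# left merge: all three firms survive; 'BBB' gets NaN for wk_rets
merged = left.merge(right, on='Symbol', how='left')
print(merged)

# .corr() drops missing values pairwise, so the NaN row is simply ignored
print(merged[['compet_hits', 'wk_rets']].corr())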
Let's calculate and look at our correlations.
correlation = firms_df[['compet_hits', 'wk_rets', 'litigat_hits', 'sply_ch_hits1', 'sply_ch_hits2', 'sply_ch_hits3']].corr()
corr_rets = correlation[['wk_rets']].drop('wk_rets')
corr_rets
| | wk_rets
---|---|
compet_hits | 0.059983 |
litigat_hits | 0.025986 |
sply_ch_hits1 | -0.027136 |
sply_ch_hits2 | -0.150963 |
sply_ch_hits3 | -0.026279 |
ax = sns.barplot(data=corr_rets.T)
plt.title("Correlations to Week's Return")
plt.xlabel("Risk Measure")
plt.ylabel("Correlation")
(Bar chart: correlation of each risk measure with the week's return.)
There is a very weak positive correlation between the returns for the week of March 9, 2020 and the competition hit counts. This might hint that more competitive firms fared slightly better that week than other firms; however, the correlation is not strong enough to support such a conclusion.
The litigation measure tells a similar but even weaker story. There is a very weak positive correlation between that week's returns and the litigation hit counts, which might hint that firms with more litigation concerns fared slightly better that week; again, this correlation is by no means strong enough to support such a conclusion.
There is a very weak negative correlation between that week's returns and the supply chain hit counts. This might hint that firms with weaker supply chains saw more negative returns that week than other firms. Intuitively, such a conclusion makes some sense, and the fact that all three supply chain measurements produced negative correlations lends it a little support. Even so, these correlations are not strong enough to support any definitive conclusion.
While the results did not produce the kind of conclusion we may have been looking for, the project could be refined in several ways. First, more time spent analyzing the language of 10-Ks could yield better keywords for the text processing. Including more keywords and keyword variations might also produce higher hit counts; with fewer firms stuck at zero hits, any underlying relationship might show up more clearly in the correlations.
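For example, a broadened competition search, run inside the same loop as the original searches, might look something like the sketch below; the extra keywords are purely illustrative and untested, not a claim that they perform better.

# illustrative only: a broader competition search with more keyword variations
firms_df.at[index, 'compet_hits_v2'] = len(re.findall(
    NEAR_regex(['(compet|rival|market share|price pressur)',
                '(intens|fierce|significant|aggressive|erod|risk|susceptible)'], 5, partial=True),
    text))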
The other possibility is that these risks simply did not have much impact on the variability of these firms' stock prices. If a conclusion had to be drawn from this data, the most defensible one concerns supply chains, since all three of those measurements produced negative correlations and the second measurement's correlation of roughly -0.15 was the strongest found. However, it is most accurate to simply say that no conclusions can safely be made.