Although the United States is widely considered to have the safest food system in the world, the statistics on foodborne disease in the US are alarmingly high. According to a 2011 report from the Centers for Disease Control and Prevention (CDC), about 48 million Americans, or 1 in 6, contract a foodborne illness each year. Of this number, about 128,000 are hospitalized and 3,000 die. Worldwide, 31 foodborne hazards were responsible for 600 million foodborne illnesses and 420,000 deaths in 2010, according to the World Health Organization (WHO).

Norovirus causes the most foodborne illnesses in the US, while nontyphoidal Salmonella is responsible for the most hospitalizations. Six other foodborne disease agents, along with norovirus, account for 90% of all domestically acquired foodborne illnesses, hospitalizations, and deaths in the US: Campylobacter spp., Clostridium perfringens, Escherichia coli O157, Listeria monocytogenes, nontyphoidal Salmonella, and Toxoplasma gondii. These statistics are unfortunate given that foodborne diseases are preventable. The study of foodborne disease epidemiology helps us understand the nature of these diseases and how they spread. Epidemiologists answer such questions as:

  • Who became ill?
  • When did the symptoms appear?
  • Where did the illness occur?
  • How many people became sick?
  • What were the demographics of those who fell ill (e.g., age, gender, race, ethnic group)?

The data collected may be used to create epidemic curves like the one below, showing the number of cases versus the date people became ill. Epidemic curves provide information on how a disease progresses and trends over time, the magnitude of the outbreak, and the incubation time of the infectious agent. This allows epidemiologists to develop hypotheses about the cause, patterns, and possible risk factors of the disease.

Example of an epidemic (epi) curve during a multistate outbreak investigation of Salmonella Heidelberg infections, 2013–2014.
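An epi curve like the one captioned above can be tallied directly from patients' illness onset dates. As a minimal illustration, the following Python sketch counts hypothetical onset dates into daily case totals; the dates are invented for demonstration.

```python
from collections import Counter
from datetime import date

# Hypothetical illness onset dates collected from patient interviews
onset_dates = [
    date(2024, 3, 1), date(2024, 3, 2), date(2024, 3, 2),
    date(2024, 3, 3), date(2024, 3, 3), date(2024, 3, 3),
    date(2024, 3, 4), date(2024, 3, 4), date(2024, 3, 6),
]

# An epi curve is simply the count of new cases per onset date
epi_curve = Counter(onset_dates)

for day in sorted(epi_curve):
    # Print a text histogram: one '#' per case on that day
    print(day.isoformat(), "#" * epi_curve[day])
```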

However, to test these hypotheses, epidemiologists turn to analytical epidemiology, such as cohort and case-control studies. A cohort study is a study of disease in a defined group called a cohort, e.g., people who attended a certain wedding. It is used to estimate the relative risk, or likely future outcome due to exposure, found by calculating the probability of getting sick if exposed divided by the probability of getting sick if not exposed. For example, suppose 105 people went to a wedding and 40 of them ate the potato salad. Of those 40, 35 got sick, while of the 65 people who did not eat the potato salad, only 5 got sick. Therefore, the probability of getting sick after exposure to the potato salad is 35/40, and the probability of getting sick without any exposure is 5/65. So what's the relative risk?

Relative risk = (35/40) ÷ (5/65) = 0.875 ÷ 0.077 ≈ 11.4

This means that people who ate the potato salad were 11.4 times more likely to get sick than those who did not, so there is a very strong positive association between eating the potato salad and getting sick. If this number were 1, there would be no association; that is, you could not argue that the potato salad was the cause. If the number were less than 1, there would be a negative association, meaning that people who ate the potato salad were actually protected against the illness. The calculation is sketched in code below.
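Here is a minimal Python sketch of the relative risk calculation, using the wedding numbers from the example above; the function name relative_risk is an illustrative choice, not a standard library call.

```python
def relative_risk(sick_exposed, total_exposed, sick_unexposed, total_unexposed):
    """Probability of illness if exposed divided by probability if not exposed."""
    risk_exposed = sick_exposed / total_exposed        # 35/40 = 0.875
    risk_unexposed = sick_unexposed / total_unexposed  # 5/65 = 0.077
    return risk_exposed / risk_unexposed

# Wedding example: 40 ate the potato salad (35 got sick),
# 65 did not eat it (5 got sick)
rr = relative_risk(35, 40, 5, 65)
print(f"Relative risk: {rr:.1f}")  # -> 11.4
```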

Unlike a cohort study, which looks forward to predict outcomes, a case-control study looks back in time by comparing a case group that got sick with a control group that did not. It uses the odds ratio instead of the relative risk. The odds ratio is calculated as the odds of exposure in the case group divided by the odds of exposure in the control group, where the odds are p/(1 − p) and p is the proportion exposed in each group. For example, let's say 50 people at a party got sick. This is the case group. They can be compared with a control group of 50 people who attended the same party and did not get sick. Suppose 20 of the 50 cases ate the chicken and 30 did not, while of the 50 sickness-free controls, 10 ate the chicken and 40 did not.

More simply, the odds ratio can be determined using a two-by-two table indicating exposure and outcome numbers for each group:

                        Ate chicken    Did not eat chicken
  Cases (sick)               20                 30
  Controls (not sick)        10                 40

If you label the cells as A (exposed cases), B (exposed controls), C (unexposed cases), and D (unexposed controls), the odds ratio works out to (A/C) × (D/B).

Therefore, odds ratio = (A/C) × (D/B) = (20/30) × (40/10) ≈ 2.7

What this means is that the people who got sick were 2.7 times more likely to have eaten the chicken than those who did not get sick. This strong positive association supports the conclusion that the chicken was the cause of the illness. If this number were 1, there would be no association, and a value less than 1 would mean that eating the chicken protected against getting sick. The same calculation is sketched in code below.
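A minimal Python sketch of the odds ratio calculation from the two-by-two table above; the cell labels follow the A, B, C, D convention, and odds_ratio is an illustrative name.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table.

    a: exposed cases      b: exposed controls
    c: unexposed cases    d: unexposed controls
    """
    return (a / c) * (d / b)

# Party example: 20 of 50 cases ate the chicken,
# 10 of 50 controls ate the chicken
print(f"Odds ratio: {odds_ratio(20, 10, 30, 40):.1f}")  # -> 2.7
```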

Gathering data from people on what they ate can lead to erroneous data because it depends on accurate personal recall; many people interviewed may not remember what they ate. Therefore, epidemiologists also use information from molecular studies to get more precise and objective data. This may involve extracting DNA, copying it using the polymerase chain reaction (PCR), and separating and comparing it using electrophoresis. Molecular epidemiology allows epidemiologists to determine whether the disease is caused by the same strain of pathogen as the suspected source. This technique is an important part of a foodborne disease surveillance system used to keep track of geographical (place) and temporal (time and season) trends in the distribution of disease agents. The information generated is critical to regulators, policy makers, and health educators in detecting outbreaks, determining their location and extent, identifying causes and risk factors, and deciding whether or not to recall food. Information gathered from surveillance may also reveal the type of epidemic, which will dictate mitigation efforts. Epidemics may be point source, continuous, intermittent, or propagated. What's the difference? Point source, continuous, and intermittent epidemics are associated with a common source, while a propagated epidemic spreads from person to person.
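As a toy illustration of strain matching, the following Python sketch compares two DNA fingerprints, represented as sets of electrophoresis band sizes, using the Dice similarity coefficient, one common way banding patterns are compared. The band sizes and the dice_similarity helper are invented for demonstration.

```python
def dice_similarity(bands_a, bands_b):
    """Dice coefficient between two sets of band sizes (0 = no match, 1 = identical)."""
    shared = len(bands_a & bands_b)
    return 2 * shared / (len(bands_a) + len(bands_b))

# Hypothetical band sizes (in kilobases) from gel electrophoresis
patient_isolate = {20, 48, 97, 145, 210, 310}
food_isolate    = {20, 48, 97, 145, 210, 310}  # suspected source
unrelated       = {33, 48, 80, 145, 260}

print(dice_similarity(patient_isolate, food_isolate))  # 1.0 -> likely the same strain
print(dice_similarity(patient_isolate, unrelated))     # ~0.36 -> poor match
```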

In a point source epidemic, the disease suddenly occurs, then tapers off and goes away. It is the result of contamination from a single source. For example, you and your friends went to a potluck and all got sick from eating contaminated eggs. The onset of symptoms will differ from person to person, since we all respond differently to infection, so you will see the disease popping up at different times and then tapering off as the last person to show symptoms gets better. However, the window in which symptoms appear will be narrow and consistent with the incubation time of the pathogen.

In a continuous epidemic, the infection occurs continuously at a high rate in a community over a long time (beyond the incubation time of the disease), for example, from drinking contaminated water from the common well that supplies a town.

In an intermittent epidemic, people are infected sporadically over time as they are exposed to the contaminated food or water at random, with very little predictability of when they will be exposed.

In a propagated epidemic, the disease tends to spike higher and higher as infected individuals pass it on to others, much like the spread of COVID-19. A toy simulation of this pattern is sketched below.
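To make the propagated pattern concrete, here is a toy Python simulation in which each case infects a fixed number of new people per generation; the reproduction number of 2 is an arbitrary illustrative choice, not a property of any particular pathogen.

```python
def propagated_epidemic(initial_cases=1, reproduction_number=2, generations=6):
    """Yield the number of new cases in each generation of person-to-person spread."""
    cases = initial_cases
    for generation in range(generations):
        yield generation, cases
        cases *= reproduction_number  # each case infects this many new people

for generation, cases in propagated_epidemic():
    print(f"Generation {generation}: {cases} new cases")
# Cases climb 1, 2, 4, 8, ... -- the escalating spikes of a propagated epidemic
```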

Figure: typical epidemic curve shapes for each epidemic type. Source: Outbreak Toolkit.

Being able to create these curves for decision making relies, of course, heavily on reporting. Steps that make reporting possible include:

  1. Getting sick
  2. Seeking medical care
  3. Doctor requesting a specimen, e.g., stool, for testing
  4. Patient agreeing to get tested and then getting the test

Only after all this can the doctor report the illness. You can see the challenge here. First of all, many who get sick don't go to the doctor. If they do, the doctor may not ask for a test specific to the disease. If the doctor does suggest testing, the patient may choose not to get tested. Therefore, the number of cases reported for any disease is usually woefully lower than the actual number. The sketch below shows how these drop-offs compound.
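A back-of-the-envelope Python sketch of how the drop-offs at each reporting step multiply; the probabilities are invented placeholders, not published estimates.

```python
# Hypothetical probabilities that a sick person clears each reporting step
p_seek_care   = 0.30  # goes to the doctor
p_specimen    = 0.50  # doctor requests a specimen
p_gets_tested = 0.80  # patient agrees and completes the test

true_cases = 10_000
reported = true_cases * p_seek_care * p_specimen * p_gets_tested
print(f"Reported: {reported:.0f} of {true_cases} actual cases")  # -> 1200
```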

Sadly, foodborne diseases carry a significant toll on individuals and society. This toll can be measured in both monetary and non-monetary terms. Monetary measurements could include:

  1. Medical expenses
  2. Lost wages
  3. Total value individuals report as their willingness to pay (WTP) to reduce suffering and risk of death
  4. Cost of actions taken to protect against illness

Non-monetary measures quantify how the disease affects quality of life. Two commonly used ones are the quality-adjusted life year (QALY) and the disability-adjusted life year (DALY). QALYs measure years lived in perfect health, on a weight scale from 0 to 1: a weight of 1 means one year of perfect health, and 0 means death. The QALY is commonly used to determine whether a medical intervention is worth it by assigning a number to the quality of life gained as a result of the intervention. For example, should taxpayers pay for the development of a new drug that will only add 6 months of good health out of the year (0.5 × 1 = 0.5 QALYs)? DALYs measure years lost to ill health or premature death, also weighted from 0 to 1, but with 0 representing perfect health (no healthy life lost) and 1 representing one whole year of healthy life lost. A small worked example follows below.
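A minimal Python sketch of the QALY arithmetic behind the drug example above; the 0.5-year duration and utility weight of 1 come from the text, while the second call uses an invented utility of 0.7 for comparison.

```python
def qalys(years, utility):
    """Quality-adjusted life years: time lived weighted by health utility (0-1)."""
    return years * utility

# Drug example from the text: 6 months (0.5 year) in perfect health (utility 1)
print(f"QALYs gained: {qalys(0.5, 1.0)}")  # -> 0.5

# For comparison: a full year lived at utility 0.7 yields 0.7 QALYs
print(f"QALYs: {qalys(1.0, 0.7)}")
```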

Measuring the cost of foodborne disease is important to regulators, but equally or even more important is understanding how these costs can be prevented in the first place. This is where risk analysis comes in. Risk analysis involves identifying potential disease risks and their severity (risk assessment), managing those risks to prevent or lower harm (risk management), and sharing relevant information about the diseases, their causes, and how to control them (risk communication).

Today’s food systems rely on risk analysis strategies to keep food safe rather than leaving it up to chance. Various food safety management systems have been developed and implemented for this purpose. In the US, HACCP (hazard analysis critical control points) is required for all plants producing meat and meat products, while the HARPC (hazard analysis and risk-based preventive controls) system is required for other foods. Both systems require identifying potential food hazards (physical, chemical, and biological), assessing the likelihood and severity of their occurrence, implementing management practices to reduce and control them, and keeping careful records. A simple hazard-scoring sketch follows below.
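As a toy illustration of the likelihood-and-severity step, here is a Python sketch that ranks hypothetical hazards with a simple risk matrix (risk = likelihood × severity). The hazards, 1-to-5 scales, and action threshold are invented for demonstration; neither HACCP nor HARPC prescribes this exact scoring.

```python
# Hypothetical hazards scored on 1-5 scales for (likelihood, severity)
hazards = {
    "metal fragments (physical)":             (2, 4),
    "Salmonella in raw chicken (biological)": (4, 5),
    "cleaning-agent residue (chemical)":      (2, 3),
}

ACTION_THRESHOLD = 10  # arbitrary cutoff for requiring a control measure

for hazard, (likelihood, severity) in hazards.items():
    risk = likelihood * severity
    action = "control required" if risk >= ACTION_THRESHOLD else "monitor"
    print(f"{hazard}: risk score {risk} -> {action}")
```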

