
I put this little smiley face here, so you’ll feel comfortable. And you do, right? Because if you’re a writer, you might not really want to think about math so much. And statistics? Yuck!

At least that’s how I once felt. Stats and I have never been great friends, but the more I learn about the subject, the more I like it. Really.

And that’s a really good thing, since much of my writing requires statistics.

No, I didn’t say sometimes uses or is strongly suggested. I said requires. And I meant it.

The reasons are pretty darned obvious, really. In fact, there are three really good ones.

Good Stats Inspire Trust

When you add the right numbers to a story in the right ways, your readers will come right along with you. Readers don’t want to be overwhelmed with numbers, but well-placed figures will help your reader believe what you’re saying.

It’s not enough to say, for example, that incomes have risen in a particular geographic area. How do you know that? And if a source told you so, how can you confirm that information?

Of course, if your readers can’t trust the statistic, they can’t trust you either. It’s critical to learn how to judge data, how to assess if the numbers make sense. You’ve also got to learn how to sprinkle in the data without losing readers. (There’s an entire chapter in my book, Math for Writers, that shows you how.)

Good Stats Keep Your Story Honest

So many times, the story I end up with is not the story I expected. Here’s an example: when I set out to write about methamphetamine use in Maryland for my local alt-weekly, I never expected to find out what I did. Compared to West Virginia, Virginia and Pennsylvania, my state had shockingly low meth numbers. I looked at the information six ways to Sunday, and I came up with the same conclusion: Maryland did not have a meth problem.

If I hadn’t looked at the stats, I might have written a very different story. I could have focused on the very few meth lab busts in Maryland, painting the all-too-common and vivid picture of destruction and death. I could have interviewed a handful of gay men who were tweaking on the stimulant regularly.

Instead, I compared the number of meth lab busts, treatment centers, injuries and deaths to those in surrounding states and found the totals to be much, much lower. It was such an astonishing difference, I realized pretty quickly that I had a very different story on my hands. (And boy was that a blast to report and write!)

Even if you think you know the stats, get them and look at them carefully. Do careful comparisons with like numbers. That little bit of effort and thought will keep you honest.

By the way, stats keep editors honest, too. Many times, we writers are assigned stories by editors who think they know what the angle will be. I’ve had to prove to several editors that their assumptions are off base. Numbers help.

Good Stats Help You Land Great Stories

There are editors out there looking for writers who don’t run in the other direction when math enters the story. After turning in a numbers-heavy story to a national publication a few weeks ago, I asked the editor for more assignments like it. She was so happy to hear that I was willing to tackle stories that relied on statistics for the reporting. Her stable of writers is pretty darned thin when it comes to writers like me. That works to my benefit.

And if you can be the writer who finds a new angle on an old story, thanks to stats, you’ll be a hero to many editors. Digging into the numbers a little more than other writers can help you uncover a gem or two. Don’t take the press release at face value. Figure out if there’s more to the public relations pitch. Call up the researcher. Google the topic and make some comparisons. Then include those hard numbers in your query. Editors love that stuff.

Whether you love math or hate it, if you’re a non-fiction writer, you will use statistics. You might as well get a better handle on those numbers and what they really mean. Then you’re free to do what you really want to do: write.

Photo credit: mpclemens via Compfight cc

Yeah, yeah. I get it. You became a writer because you didn’t want to do math. You got into editing a general interest magazine because you wouldn’t be required to remember the difference between mean and median. Or you decided to write novels, thanks to a horrific experience in your Math for English Majors class.

Only science writers need math, right?

So yeah, science writers are most likely going to geek out on statistical analysis or a super-cool line graph. But lots of us writers need math to help us rise to the top of our fields. It’s no secret that I believe this. I wrote a book about it.

In fact, for some writers — like business or health reporters — math is a pretty important skill. But even fiction writers can use a dose of math now and then. Let me break it down for you.

Business Writers

If your beat is business, you are probably pretty comfortable with the math that companies use to assess their financial health. This means understanding a little bit about percentages and statistical analysis. You know how to read an annual report, including the charts and graphs that illustrate what the company is trying to say.

At the same time, you probably have a healthy dose of skepticism. You know that statistics can be misleading. To really analyze a company’s status, you need to crunch the numbers yourself. Or at least question where they came from.

Health Writers

It seems that most health stories in magazines and newspapers hinge on a recent study or report. It’s clear when the writer and editor get the math behind that research — and when they don’t. If you’re a health writer, you know how to use those numbers so that your readers are not misled.

This means understanding something about sample size: knowing when a study’s sample is too small and when it’s large enough. You also know to ask for the study itself, instead of depending only on the summary or (worse) a press release written by a PR person who doesn’t have a background in that field.

Book Authors

Whether you ghostwrite or pen books under your own name, a little bit of math can go a long way toward making sure you’re on the road to an actual book and a little money. Even fiction writers can use math in this way.

You use formulas in a spreadsheet to help count down your words and stay on deadline. You use statistical analysis to demonstrate to a potential publisher or agent that people want to read your book. Your platform is not only based on the number of Twitter followers you have, but also how well your fans engage with you on social media.

So even if you were promised no math in your chosen career as a writer, a little bit of math can help. Thankfully, you won’t need a math degree or even a college statistics refresher to master these computations. Clearly you’re smart enough. You’re a writer!

Photo Credit: Toncu via Compfight cc

Need to brush up on your math skills? Check out my book, Math for Writers: Tell a Better Story, Get Published and Make More Money. And be on the lookout for my upcoming online statistics course for writers and journalists. In the meantime, if you have any questions, ask them in the comments section!

On Wednesday, we talked about sample bias, or ways to really screw up the results of a survey or study. So how can researchers avoid this problem? By being random.

There are several kinds of samples, from simple random samples to convenience samples, and the type that is chosen determines the reliability of the data. The more random the selection, the more reliable the results. Here’s a rundown of several different types:

Simple Random Sample: The most reliable option, the simple random sample works well because each member of the population has the same chance of being selected. There are several different ways to select the sample — from a lottery to a random number table to computer-generated values. Selections can be made with replacement (a member can be drawn a second time) or without replacement (each selection is held out, so there are no duplicates).
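Here’s what those two flavors look like in practice, sketched in Python (the population and sample size are made up for illustration):

```python
import random

# A made-up population of ten members.
population = [f"person_{i}" for i in range(1, 11)]

# Without replacement: each selection is held out, so no duplicates.
print(random.sample(population, k=4))

# With replacement: every draw starts fresh, so duplicates are possible.
print(random.choices(population, k=4))
```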

Stratified Sample: In some cases it makes sense to divide the population into subgroups and then conduct a random sample of each subgroup. This method helps researchers highlight a particular subgroup in a sample, which can be useful when observing the relationship between two or more subgroups. The number of members selected from each subgroup must match that subgroup’s representation in the larger population.

What the heck does that mean? Let’s say a researcher is studying glaucoma progression and eye color. If 25% of the population has blue eyes, 25% of the sample must also. If 40% of the population has brown eyes, so must 40% of the sample. Otherwise, the conclusions may be unreliable, because the samples do not reflect the entire population.
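Sketched in Python, the eye-color example works out like this (the 35% “other” share is just the remainder I’ve added so the shares sum to 100%, and the overall sample size is arbitrary):

```python
# Population shares from the eye-color example; "other" is the remainder.
shares = {"blue": 0.25, "brown": 0.40, "other": 0.35}

sample_size = 1000  # arbitrary overall sample size

# Each stratum's count must match its share of the population.
quotas = {color: round(share * sample_size) for color, share in shares.items()}

print(quotas)  # {'blue': 250, 'brown': 400, 'other': 350}
```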

Then there are the samples that don’t provide such reliable results:

Quota Sample: In this scenario, the researcher deliberately sets a quota for a certain stratum. When done honestly, this allows for representation of minority groups in the population. But it does mean that the sample is no longer random. For example, if you wanted to know how elementary-school teachers feel about a new dress code developed by the school district, a random sample may not include any male teachers, because there are so few of them. However, requiring that a certain number of male teachers be included in the sample ensures that male teachers are represented — even though the sample is no longer random.

Purposeful Sample: When it’s difficult to identify members of a population, researchers may include any member who is available. And when those already selected for the sample recommend other members, this is called a Snowball Sample. While this type is not random, it is a way to study less visible issues, including sexual assault and illness.

Convenience Sample: When you’re looking for quick and dirty, a convenience sample is it. Remember when survey companies stalked folks at the mall? That’s a convenience or accidental sample. These depend on someone being at the right (wrong?) place at the right (wrong?) time. When people volunteer for a sample, that’s also a convenience sample.

So whenever you’re looking at data, consider how the sample was formed. If the results look funny, it could be because the sample was off.

On Monday, I’ll tackle sample size (something that I had hoped to include today, but didn’t get to). Meantime, if you have questions about how sampling is done, ask away!

Continuing with our review of basic math skills, let’s take a little look-see at statistics. This field is not only vast (and confusing for many folks) but also hugely important in our daily lives. Just about every single thing we do has some sort of relationship to statistics — from watching television to buying a car to supporting a political candidate to making medical decisions. Like it or not, stats rule our world. Unfortunately, trusting bad data can lead to big problems. 

First, some definitions. A population is the entire group that the researchers are interested in. So, if a school system wants to know parents’ attitudes about school starting times, the population would be all parents and caregivers with children who attend school in that district.

A sample is a subset of the population. It would be nice to track the viewing habits of every single television viewer, but that’s just not a realistic endeavor. So A.C. Nielsen Co. puts its set-top boxes in a sample of homes. The trick is to be sure that this sample is big enough (more on that Friday) and that it’s representative. When samples don’t represent the larger population, the results aren’t worth a darn. Here’s an example:

Ever hear of President Landon? There’s good reason for that. But on Halloween 1936, a Literary Digest poll predicted that Gov. Alfred Landon of Kansas would defeat President Franklin Delano Roosevelt come November.

And why not? The organization had come to this conclusion based on an enormous sample, mailing out 10 million sample ballots and asking recipients how they planned to vote. In fact, about 1 in 4 Americans had been asked to participate, with stunning results: the magazine predicted that Landon would win 57.1% of the popular vote and an Electoral College margin of 370 to 161. The problem? This list was created using registers of telephone numbers, club membership rosters and magazine subscription lists.

Remember, this was 1936, the height of the Great Depression and also long before telephones  and magazine subscriptions became common fixtures in most families. Literary Digest had sampled largely middle- and upper-class voters, which is not at all representative of the larger population.  At the same time, only 2.4 million people actually responded to the survey, just under 25 percent of the original sample size.

On Election Day, the American public delivered a scorching defeat to Gov. Landon, who won Electoral College votes in Vermont and Maine only. This was also the death knell for Literary Digest, which folded a few years later.

This example neatly describes two forms of sample bias: selection bias and nonresponse bias. Selection bias occurs when there is a flaw in the sample selection process. In order for a statistic to be trustworthy, the sample must be representative of the entire population. For example, conducting a survey of homeowners in one neighborhood cannot represent all homeowners in a city.

Self-selection can also play a role in selection bias. If a poll, survey or study depends solely on participants volunteering on their own, the sample will not necessarily be representative of the entire population. There’s a certain amount of self-selection in any survey, poll or study. But there are ways to minimize the effects of this problem.

Nonresponse bias is related to self-selection. It occurs when people choose not to respond, often because doing so is too difficult. For this reason, mailed surveys are not the best option.  In-person polling has the least risk of nonresponse bias, while telephone carries a slightly higher risk.

If you’re familiar with information technology, you know the old adage: Garbage in, garbage out. This definitely holds true for statistics. And this is precisely why the saying Mark Twain popularized — “Lies, damned lies and statistics” — is so apropos. When the sample is bad, the results will be too, but that doesn’t stop some from unintentionally or intentionally misleading the public with bad stats. If you plan to make good decisions at any point in your everyday life, well, you’d better be able to cull the lies from the good samples.

If you have questions about sample bias, please ask in the comments section. Meantime, here are the answers to last Wednesday’s practice with percentage change problems: –2%, 7%, –6%, –35%. Friday, we’ll talk about sample size, which (to me) is a magical idea. Really!
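And if you want to double-check those percentage change answers against your own work, the formula fits in a few lines of Python (a minimal sketch; the old and new values here are placeholders, not the original practice problems):

```python
def percent_change(old: float, new: float) -> float:
    """Percent change from old to new; a negative result means a decrease."""
    return (new - old) / old * 100

# Placeholder example: a value that falls from 50 to 49 has changed by -2%.
print(f"{percent_change(50, 49):.0f}%")  # prints -2%
```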

Most of you are probably sick to death of political campaign polls. But these numbers have become a mainstay of the American political process. In other words, we’re stuck with them, so you might as well get used to them — or at least understand the process as well as you can.

Last Friday, I wrote about how the national polls really don’t matter. That’s because our presidential elections depend on the Electoral College. We certainly don’t want to see one candidate win the popular vote, while the other wins the Electoral College, but it’s those electoral votes that really matter.

Still, polls matter too. I know, I know. Statistics can be created to support *any* cause or person. And that’s true. (Mark Twain popularized the saying, “There are lies, damned lies, and statistics.”) But good statistics are good statistics. These results are only as reliable as the process that created them.

But what is that process? If it’s been a while since you took a stats course, here’s a quick refresher. You can put it to use tomorrow when the media uses exit polls to predict election and referendum results before the polls close.

Random Sampling

If I wanted to know how my neighbors were voting in this year’s election, I could simply ask each of them. But surveying the population of an entire state — or all of the more than 200 million eligible voters in the U.S. — is downright impossible. So political pollsters depend on a tried-and-true method of gathering reliable information: random sampling.

A random sample does give a good snapshot of a population — but it may seem a bit mysterious. There are two obvious parts: random and sample.

The amazing thing about a sample is this: when it’s done properly (and I’ll get to that in a minute) the sample does accurately represent the entire population. The most common analogy is the basic blood draw. I’ve got a wonky thyroid, so several times a year, I need to check to see that my medication is keeping me healthy, which is determined by a quick look at my blood. Does the phlebotomist take all of my blood? Nope. Just a sample is enough to make the diagnosis.

The same thing is true with population samples. And in fact, there’s a magic number that works well enough for most situations: 1,000. (This is probably the hardest thing to believe, but it’s true!) For the most part, researchers are happy with a 95% confidence level and a ±3% margin of error. This means that 95 times out of 100, the reported result will fall within ±3 percentage points of the true value. (More on that later.) According to the math, to reach this confidence level, only about 1,000 respondents are necessary.
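If you want to see where that magic number comes from, here’s the standard back-of-the-envelope calculation, sketched in Python (it assumes the worst-case proportion of 50%, which is what pollsters typically use):

```python
import math

z = 1.96      # z-score that covers 95% of the normal curve
moe = 0.03    # desired margin of error: plus or minus 3 points
p = 0.5       # worst-case proportion; this maximizes the required sample

# Required sample size: n = z^2 * p * (1 - p) / moe^2
n = (z**2 * p * (1 - p)) / moe**2

print(math.ceil(n))  # 1068, right around that 1,000 rule of thumb
```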

So we’re looking at surveying at least 1,000 people, right? But it’s not good enough to go door-to-door in one neighborhood to find these people. The next important feature is randomness.

If you put your hand in a jar full of marbles and pull one marble out, you’ve randomly selected that marble. That’s the task that pollsters have when choosing people to respond to their questions. And it’s not as hard as you might think.

Let’s take exit polls on Election Day. These are short surveys conducted at the voting polls themselves. As people exit the polling place, pollsters stop certain voters to ask a series of questions. The answers to these questions can predict how the election will end up and what influenced voters to vote a certain way.

The enemy of good polling is homogeneity. If only senior citizens who live in wealthy areas of a state are polled, well, the results will not be reliable. But randomness irons all of this out.

First, the polling place must be random. Imagine writing down the locations of all of the polling places in your state on little strips of paper. Then put all of these papers into a bowl, reach in and choose one. That’s the basic process, though this is done with computer programs now.

Then the polling times must be well represented. If a pollster only surveys people who voted in the morning, the results could be skewed toward people who vote on their way home from a night shift, people who don’t work, or early risers, right? So care is taken to survey people at all times of the day.

And finally, it’s important to randomly select people to interview. Most often, this can be done by simply approaching every third voter who exits the polling place (or every other voter or every fifth voter; you get my drift).
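In code, that every-third-voter rule (often called systematic sampling) is a one-liner. Here’s a minimal sketch with a made-up list of voters:

```python
# A made-up stream of voters leaving a polling place, in order.
voters = [f"voter_{i}" for i in range(1, 31)]

# Approach every third voter, starting with the third one out the door.
approached = voters[2::3]

print(approached)  # ['voter_3', 'voter_6', 'voter_9', ...]
```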

Questions

But the questions being asked — or I should say the ways in which the questions are asked — are at least as important. These should not be “leading questions,” or queries that might prompt a particular response. Here’s an example:

Same-sex marriage is threatening to undermine religious liberty in our country. How do you plan to vote on Question 6, which legalizes same-sex marriage in the state?

(It’s easier to write a leading question about voting intent than a leading exit-poll question.)

Questions must be worded so that they elicit the most reliable responses. When they are confusing or leading, the results cannot be trusted. Simplicity is almost always the best policy here.

Interpreting the Data

It’s not enough to just collect information. No survey results are 100 percent reliable 100 percent of the time. In fact, there are “disclaimers” for every single survey result. First of all, there’s a confidence level, which is generally 95%. This means exactly what you might think: Based on the sample size, we can be 95 percent confident that the results are accurate. Specifically, a 95% confidence interval covers 95 percent of the normal (or bell-shaped) curve.

The larger the random sample, the narrower the confidence interval and the smaller the margin of error. The smaller the sample, the wider the interval and the larger the margin of error.

But why 95%? The answer has to do with standard deviation, or how much variation (deviation) there is from the mean, or average, of the data. When the data is normally distributed (follows the normal or bell curve), 95% of it falls within plus or minus two standard deviations of the mean.
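You can check that two-standard-deviations rule with a quick simulation (a sketch; the mean and standard deviation here are arbitrary):

```python
import random

mean, sd = 100, 15    # arbitrary parameters for a bell curve
trials = 100_000

# Draw values from the normal curve and count how many land
# within two standard deviations of the mean.
within = sum(
    1 for _ in range(trials) if abs(random.gauss(mean, sd) - mean) <= 2 * sd
)

print(f"{within / trials:.1%}")  # about 95%, give or take
```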

This isn’t the same thing as the margin of error, which is the range around the reported result within which the true value is expected to fall.

Let’s say exit polls show that Governor Romney is leading President Obama in Ohio by 2.5 percentage points. If the margin of error is 3%, Romney’s lead is within the margin of error. And therefore, the results are really a statistical tie. However, if he’s leading by 8 percentage points, it’s more likely the results are showing a true majority.
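Here’s that rule of thumb as a tiny function (a sketch that follows the simple reading in the paragraph above; a careful pollster would account for the margin of error on both candidates’ numbers, but this matches the everyday interpretation):

```python
def read_lead(lead_in_points: float, margin_of_error: float) -> str:
    """A lead inside the margin of error reads as a statistical tie."""
    if abs(lead_in_points) <= margin_of_error:
        return "statistical tie"
    return "likely a true lead"

print(read_lead(2.5, 3.0))  # statistical tie
print(read_lead(8.0, 3.0))  # likely a true lead
```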

Of course, all of that depends — heavily — on the sampling and questions. If either or both of those are suspect, it doesn’t matter what the polling shows. We cannot trust the numbers. Unfortunately, we often don’t know how the samples were created or the questions were asked. Reliable statistics will include that information somewhere. And of course, you should only trust stats from sources that you can trust.

Summary

In short, there are three critical numbers in the most reliable survey results:

  • 1,000 (sample size)
  • 95% (confidence interval or level)
  • ±3% (margin of error)

Look for these in the exit polling you hear about tomorrow. Compare the exit polls with the actual election results. Which polls turned out to be most reliable?

I’m not a statistician, but in my math books, you’ll learn math that you can apply to your everyday life and that will help you understand polls and other such things.

P.S. I hope every single one of my U.S. readers (who are registered voters) will participate in our democratic process. Please don’t throw away your right to elect the people who make decisions on your behalf. VOTE!

For many folks along the East Coast, Halloween will (at the very least) be postponed, thanks to the very real terror of Super Storm Sandy. I know all of us keep these folks in our thoughts.

And the rest of us? For the most part, tonight marks a very strange annual tradition here in the U.S.: going door to door in costume, asking for free candy. To mark the occasion, I’ve collected some scary statistics about the night of tricks and treats. Read at your own risk! Bwa-ha-ha-ha! (Um… that’s my attempt at an evil laugh.)

170 million: The number of people who plan to celebrate Halloween in the U.S. (National Retail Federation)

$79.82: The average spent on costumes, decorations and candy this year. (National Retail Federation)

$113 million: The total value of pumpkin crops in the three top pumpkin-producing states (Agricultural Marketing Resource Center)

1,818: Number of pounds weighed by the largest pumpkin on record. (Guinness World Records)

15.2: The percent of costume ideas that come from Facebook. (National Retail Federation)

15.1: The percent of people that will dress their pet in a costume. (National Retail Federation)

0: The percent of pets that enjoy this tradition. (Just a guess)

6: Number of times I went trick-or-treating as a “hobo,” because I was too lazy to do much else. (Personal data)

268: The population of Skull Creek, Nebraska — named for “A LOT” of buffalo skulls and bones found in a nearby creek. (U.S. Census)

1,690: The number of pieces of candy that will fill an average-sized pillow case. (www.myscienceproject.org)

41: The percent of adults who admit eating candy from their own candy bowl between trick-or-treaters. (National Candy Association)

90: The percent of parents who admit stealing from their kids’ trick-or-treat stash. (National Candy Association)

99.9: The percent of parents who actually steal candy from their kids’ trick-or-treat stash. (Just a guess)

30: The percent of kids who sort their candy before digging in. (National Candy Association)

0: Number of kids who would rather get a toothbrush than candy, while trick or treating. (Just a guess)

Happy Halloween, everyone! Just one last word of warning: Watch out for the zombies. (Here’s how math can help you plan during a zombie apocalypse.)

What are your Halloween plans?

It’s been a rough year for the U.S. economy and workforce. No matter what your political stripe, there’s no sugar coating the numbers: unemployment is still high and people around the country are struggling. In honor of Labor Day, we’ll look at the numbers behind this news.

Once a month, the Bureau of Labor Statistics releases its employment data, and here are some interesting numbers from July 2012. (August 2012 data will be released on September 7, 2012.) Remember, this is just raw data. The numbers are important, but they can’t really tell the story behind the country’s (or a portion of the population’s) economic and employment situation. People will interpret this information differently, based on their ideologies and personal philosophies. (Politicians will interpret this data based on who they want to attract to the voting booth.)

155.013 million: The number of people in the workforce (16 years and older).

47.8: Percent of women in private workforce

82.6: Percent of women in total production and non-supervisory positions.

34.5: Average weekly hours worked for all employees.

33.7: Average weekly hours worked for all production and non-supervisory positions.

$23.52: The average hourly earnings for all employees.

$19.77: The average hourly earnings for all employees in production and non-supervisory positions.

11.472 million: Number of people in the workforce with less than a high school diploma or equivalent.

37.047 million: Number of people in the workforce with a high school diploma or equivalent.

37.398 million: Number of people in the workforce with some college or an associates degree.

47.697 million: Number of people in the workforce with a bachelor’s degree or higher.

9.616 million: Number of self-employed workers (including agriculture workers).

8.246 million: Number of people who are working part time (one to 34 hours a week), for economic reasons.

6.9: Unemployment rate* for all veterans.

8.9: Unemployment rate for all Gulf War II-era veterans.

12.4: Unemployment rate for all Gulf War II-Era veterans in the previous month (June 2012).

8.3: Unemployment rate for all non-veterans (18 years and older).

18.866 million: Number of people who are working part time (one to 34 hours a week), for other reasons (including childcare problems, school, training or family or personal reasons).

2.711 million: Number of people who have been unemployed for less than 5 weeks.

3.092 million: Number of people who have been unemployed for 5 to 14 weeks.

6.945 million: Number of people who have been unemployed for more than 15 weeks.

38.8: Average duration of unemployment in weeks.

*The unemployment rate is the percentage of the workforce that is unemployed at any given date.

Based on these numbers, what do you think about the current economy? What kinds of questions do these numbers raise? Are there other numbers that you would like to see? How does this data inform you as a voter? (Don’t worry, we won’t get into big political discussions here. I promise.)

Math Appreciation Month has finally come to a close. And I thought I would end with some math that could save your life. This is serious — and I think really interesting — stuff.

If you’ve seen a recent “best college degrees” list, you probably wondered two things: Why the heck is Applied Mathematics on the list, and what is it? First off, applied mathematics is not about crunching numbers. Instead, these folks use higher-level mathematics — from abstract algebra to differential equations to statistics — to solve a myriad of problems in a myriad of industries. And that, my friends, is why it’s on the list. In industries like energy, cell phone technology and medicine, math modeling and statistical analysis have been applied to solve really big problems.

Math modeling is one branch of this field that has become a very big deal. Let’s say a city planner wants to know how many snow plows to buy so that the city isn’t paralyzed by a winter storm. Modeling the situation mathematically is one way to address the problem. The way I look at it, math modeling helps us understand things we can’t see — because they’re part of situations that haven’t occurred or are too far away or are too tiny and hidden.

That too-tiny-and-hidden part is what math modelers are homing in on in medicine. In this field — sometimes called bioinformatics or computational biology — mathematicians help medical professionals address problems that are under the skin. Here are two examples:

Fighting Cancer: Researchers at the University of Miami (UM) and the University of Heidelberg in Germany have created a math model that will help oncologists predict how a tumor will grow, and even if and how it will metastasize. There have been other math models that look at tumors, but this one is different. Instead of looking at each cell or all of the cells as a big group, this model creates a kind of patchwork quilt of areas of the tumor to examine. As a result, the doctor can create a tailored plan for treating the disease that is very specific for each patient. The promise is that specialized (rather than generalized) treatment plans will offer patients a better chance at survival.

Treating Acetaminophen Overdoses: When a patient comes into the emergency room having overdosed on acetaminophen, the ER staff is faced with a really complex decision. Often these patients are hallucinating, unconscious or comatose. And since it’s relatively easy to overdose on the drug (it takes only five times the daily safe dosage, and acetaminophen is in many different over-the-counter and prescription medications), it’s sometimes impossible to determine when and how much of the drug was ingested. There is an antidote, but at a certain point, the doctor needs to skip that step and put the patient on the liver transplant list immediately. The trick is accurately identifying that point. University of Utah mathematician Fred Adler developed a set of differential equations that can better pinpoint the critical information needed to make these decisions.

In both of these cases, the math is pretty darned complicated, depending on a branch of calculus called differential equations. This approach is a step up from statistical analysis, which compares patient data to data collected from other patients. In other words, it assumes that tumors grow in the same way in all patients — which we know isn’t true. These dynamical math approaches allow doctors to offer treatments that are customized for each patient, based only on the information collected from the patient.

And the best part is that the doctors don’t have to know the math. If future studies bear out these new discoveries, a simple app can be designed for smart phones or tablets, allowing physicians to make diagnoses and treatment plans bedside.

I suspect these applications will continue to grow, as the medical community turns to mathematicians for insight into what we can’t see. That’s great news, because these advances can save lives.

I hope you’ve enjoyed what we’ve put together here for Math Appreciation Month. If you have questions, please ask them below. I’m always open to ideas for future blog posts, so please share them!

Photo courtesy of Pinti

January 2012 seems extra long!  In fact, there are five — count ’em, five — Mondays in this month.  And while I’ve never missed a Math at Work Monday, I decided to take a break this week.  (Want to read up on previous Q&As for this month? Check them out:  Robert the exercise physiologist, Janine the professional organizer, Jameel the budget counselor and Kiki the career coach.)

This month has been all about New Year’s Resolutions: getting in shape and getting organized, boning up on budget basics and becoming your own boss.  But what are our chances of actually succeeding in any (or all) of these things?  Once again, I ask you: let’s look at the math.

According to a 2008 survey conducted by author and motivational speaker Steven Shapiro and the Opinion Research Group (Princeton, NJ), 45 percent of Americans set New Year’s Resolutions, but only 8 percent of these reach their goals each year, and 24 percent say they never keep their resolutions.

(Disclaimer: I really can’t vouch for the veracity of this study, because I can’t find the data.  But let’s go with it, just to prove my point.  The numbers aren’t really all that important.)

How many of you read those statistics and thought: “Well, there’s no point in even making resolutions! With chances like those, I’m doomed to fail!”

Here’s the good news: If you nodded your head, you are not alone.  And here’s the better news: Statistics don’t work that way.

It’s easy to look at stats and think that they must be true and must apply to everyone in every situation.  Cold, hard numbers don’t lie, right?  Maybe the numbers don’t lie, but it sure is tempting to use those numbers to describe something that isn’t true.  (Politicians do it all the time.)

There are a couple of ways to describe this particular fallacy.  But I think one of the most important is to consider what are known as independent events.  See, each person who sets a New Year’s Resolution is independent of all of the other people who do the same thing.  (Even if you’re all making the same resolutions.)

And it gets even trickier.  Each year that you set a resolution is independent, and each resolution that you set is also — you guessed it — independent.

In other words, your success probably doesn’t have much of anything to do with how well others have followed through on their yearly goals — or even how well you’ve done in years past.  (I say probably, because you may be one of those folks who is easily influenced by statistics.  In other words, you may decide that you cannot succeed in meeting your resolutions, simply because you read somewhere that most people don’t.)

There’s tons of research out there on why people make resolutions and how they can be successful in them. If you looked at this research and determined that you have many of the same obstacles, maybe — just maybe — you could predict your chance of success.

But simply because many other people aren’t successful doesn’t mean you are automatically doomed to fail. Independence is only one reason for this.  Randomness is another.

Dice are random, but people aren’t.  (In fact, I saw a great video that demonstrates this last week.  Of course I can’t put my hands on it now, but I’ll post a link if I find it.)  While a (fair) die only has to worry about gravity, we have many more things that influence our behavior, decisions and more.  That doesn’t mean that people aren’t more likely to act a certain way under certain conditions.  But it certainly does mean that your New Year’s resolutions are not beholden to statistics.
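A quick simulation shows what that independence means for dice: the chance of rolling a six is about 1 in 6 whether or not the previous roll was a six. (A minimal sketch; the number of rolls is arbitrary.)

```python
import random

rolls = [random.randint(1, 6) for _ in range(200_000)]

# Collect every roll that immediately follows a six.
after_six = [b for a, b in zip(rolls, rolls[1:]) if a == 6]

p_overall = rolls.count(6) / len(rolls)
p_after_six = after_six.count(6) / len(after_six)

# Both land near 1/6 (about 0.167): the die has no memory.
print(f"P(six): {p_overall:.3f}  P(six right after a six): {p_after_six:.3f}")
```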

So, the next time you read an article about the low rate of success with New Year’s Resolutions, remember this: You certainly can succeed — even if you failed last year. And if you are philosophically opposed to New Year’s resolutions, you’ll need a better reason than “most people don’t keep them.”

How are you doing with your New Year’s Resolutions?  Share in the comments section.  And come back on Wednesday — I’ll reveal how things are going with me!

I’d like to welcome my first guest poster here at Math for Grownups, Carole Moore.  Carole is a fellow writer and the author of The Last Place You’d Look: True Stories of Missing Persons and the People Who Look for Them, which hit bookstores in May.  Her book is a gripping account of a variety of missing persons cases around the country.  A former police detective, Carole knows her stuff.

Carole Moore’s most recent book.

She also knows how darned scary missing-persons statistics can be.  And so she’s offered to take a closer look at these numbers and what story they really tell.  This is a critical way that we can use math without even being aware of it.  See, as scared of math as many of us are, we may also be inclined to trust numbers.  Unfortunately, without some perspective and context, numbers don’t mean a thing.  Keep reading…

When it comes to crime, statistics can be misleading. The truth is in how you break down the numbers. Let’s look at one example:  According to the U.S. Department of Justice, 797,500 children under the age of 18 were reported missing in one year’s time. That’s an average of 2,185 kids per day. What’s more interesting is what those numbers don’t say:

First, the category of the report from which they’re drawn (NISMART-2) specifies “reported” missing. That means that some kids who disappeared in the same time bracket were not reported within the reporting period. It doesn’t necessarily mean they weren’t reported at all – although many aren’t. Illegal immigrants often won’t call police out of fear of reprisals, and the children of the mentally ill, transients, the homeless, prostitutes and drug users, as well as foster kids, often escape the count. So, while the figure 797,500 sounds huge, the actual number of missing children in a year well exceeds “reported” missing.

Now, look a little closer at those numbers, starting with family abductions, which account for 203,900 children reported missing, and 58,200 kids classified as non-family abductions. That leaves 535,400 children unaccounted for – of these children, only 115 were considered “stereotypical” kidnappings. (Examples of stereotypical kidnappings are usually extreme and include cases such as those of Jaycee Dugard and Adam Walsh.) The remaining 535,285 children fit in none of these specific categories.

The children left are grouped miscellaneously. For example, a child who was reported missing after stopping at a friend’s house following school (and who didn’t notify a parent or caretaker) would be a reported missing child for statistical purposes. So would a child who becomes lost or hides out and whose disappearance is reported – even if the child is not missing in the truest sense of the word, he or she would be classified as “reported missing.”

My point is that while the statistics here don’t lie, they also don’t tell the whole story in and of themselves.  Many missing children are never reported missing, while many of the reported missing really aren’t missing at all. To truly understand crime stats, it’s important to dig deeper than the numbers.

Carole Moore is a former police detective and current freelance writer, as well as contributing editor and columnist at Law Enforcement Technology.  You can learn more about her at www.carolemoore.com.

Do you have questions about crime statistics?  Ask them in the comments section!

If you’ve ever visited the website of a prescription medication or picked up a brochure from your doctor’s office, you’ve seen the kind of work that Kim Hooper does.  And she’s proof that math and writing are not mutually exclusive endeavors.

As a senior copywriter for an advertising agency, Kim writes brochures, websites and other copy that helps promote a brand or a product.  Since her agency’s primary client is a pharmaceutical company, much of her writing is science-based.

When do you use basic math in your job?

Much of my job involves scanning through research papers about specific drugs and interpreting clinical data in a “sexy,” Madison Avenue way. This tends to involve a bit of math. For example, let’s say we want to point out that our drug is really successful with women over 40 years old. I will look through the demographic tables in the clinical study to create a compelling factoid. Let’s also say that out of 100 women, 60 are over 40 years old. So, when writing a piece, I may have a big headline that says something like, “60% of women in the clinical study were over 40 years old.”

Most of the math I do involves basic addition or subtraction and percentage calculations. Very often, I’ll do percentage calculations for side-effects data. So if 3 patients out of 150 in the clinical study experienced side effects, I’ll take this fact and make sure to call out that 98% of patients did not experience side effects.
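For the curious, that side-effects flip is one line of arithmetic. Here’s a minimal sketch in Python, using the figures from Kim’s example:

```python
patients = 150
with_side_effects = 3

# Turn the 2% side-effect rate into the more reassuring 98% figure.
pct_without = (patients - with_side_effects) / patients * 100

print(f"{pct_without:.0f}% of patients did not experience side effects")
```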

Do you use any technology (like calculators or computers) to help with this math?

I do use the calculator built into my PC to double check my work. But I almost always have to do “margin math,” meaning I show my calculations on paper so the client’s regulatory committee can review them.

How do you think math helps you do your job better?

Math keeps my left brain strong. In advertising, the right brain is very important. This is a creative business. We’re trying to find interesting, compelling ways to communicate product messages that may not be that thrilling at first glance. My left brain can help make the messages thrilling. Numbers are very appealing to consumers. If they can see information broken down into easy-to-understand percentages, for example, they may be more likely to try our medication over another one.

How comfortable are you with math?

I’ve always been a bit of a math nerd, and I went all the way through Advanced Placement Calculus in high school. In fact, it was really difficult for me to choose a major in college because I loved math and science and I also loved the arts. For a short time, I double-majored in genetics and psychology. I ended up majoring in communications, which seemed broad enough for me to explore a number of career options. I just happened to fall into a career that makes use of both sides of my brain, which I love. I really enjoy sifting through data and doing the math necessary to make facts come to life.

I think we all get a little rusty if we don’t use math regularly, but it’s been part of my job for a number of years now. There’s no way I could do calculus again, but I have no problem doing basic math. I enjoy it.

Kim Hooper is an advertising copywriter by day, novelist by night. Get to know her work at KimHooperWrites.com.

Do you have questions for Kim?  If so, ask them in the comments section!

Wondering how you (or someone you love) is going to survive without the Daytime Diva?  Here’s a selection of stats about her show, from SheKnows:

  1. About 1.3 million people came to see her show over the last 25 seasons.
  2. Most frequent female guest?  Celine Dion with 27 appearances.
  3. Most frequent male guest?  Chris Rock with 25 appearances.  (That’s not counting Dr. Phil, who came on the show a whopping 118 times!)
  4. Total number of cars given away: 570

Read the rest at SheKnows!