Thomson Reuters is the largest business-to-business information services company in the world. Some years ago, it began to reinvent itself by learning more about clients who use its products and services: tax professionals, investment managers, brokers, lawyers, accountants, financial analysts, and researchers. The company had been relying on “third-party” reports for market information. However, the reports provided few, if any, clues about actual users who benefited (or not) from its products.
Thomson began by identifying eight market segments and exploring each of them in greater detail. It employed both quantitative (surveys) and qualitative (interviews) research methods to collect information. Thomson even filmed users as they performed their job duties.
The comprehensive approach to research paid off. One of the insights to emerge from following financial analysts around was that they spent extra time creating Excel spreadsheets based on Thomson’s financial data. Thomson used this finding to incorporate an exporting facility to Excel in the product.
After thoroughly investigating the eight market segments, Thomson created a list of product attributes that needed improvement to yield better customer satisfaction. On the basis of its exhaustive market research, Thomson redesigned its product portfolio, beginning with what could be done most easily and moving to advanced features for products that could be sold at higher prices. Today, nearly 70% of Thomson’s products have undergone improvements based on its user-oriented market research process, adding to both the company’s bottom line and customer satisfaction. More importantly, perhaps, the rigorous user-oriented researching and reporting culture has spread throughout the organization.
“The rigorous user-oriented researching and reporting culture has spread throughout the organization.”
Source: Richard J. Harrington and Anthony K. Tjan, “Transforming Strategy One Customer at a Time,” Harvard Business Review 86, no. 3 (March 2008): 62–72. Reprinted by permission of Harvard Business Review. Excerpt from “Transforming Strategy One Customer at a Time,” March 2008. Copyright 2008 by the Harvard Business School Publishing Corporation; all rights reserved.
Learning Objectives
After studying this chapter, you will know how to:
Define report problems.
Employ different research strategies.
Use and document sources.
Reports depend on research. The research may be as simple as pulling up data with a computer program or as complicated as calling many different people, conducting focus groups and surveys, or even planning and conducting experiments. Care in planning, proposing, and researching reports is needed to produce reliable data.
In writing any report, there are five basic steps:
1. Define the problem.
2. Gather the necessary data and information.
3. Analyze the data and information.
4. Organize the information.
5. Write the report.
After reviewing the varieties of reports, this chapter focuses on the first two steps. Chapter 19 discusses the last three steps.
Varieties of Reports
Many kinds of documents are called reports. In some organizations, a report is a long document or one that contains numerical data. In others, one- and two-page memos are called reports. In still others, reports consist of PowerPoint slides delivered orally or printed and bound together. A short report to a client may use letter format. Formal reports contain formal elements such as a title page, a transmittal, a table of contents, and a list of illustrations. Informal reports may be letters and memos or even computer printouts of production or sales figures. But all reports, whatever their length or degree of formality, provide the information that people in organizations need to make plans and solve problems.
Reports can provide just information; information and analysis alone; or information and analysis to support a recommendation (see Figure 17.1). Reports can be called information reports if they collect data for the reader, analytical reports if they interpret data but do not recommend action, and recommendation reports if they recommend action or a solution.
The following reports can be information, analytical, or recommendation reports, depending on what they provide:
Accident reports can simply list the nature and causes of accidents in a factory or office. These reports can also analyze the data and recommend ways to make conditions safer.
Credit reports can simply summarize an applicant’s income and other credit obligations. These reports can also evaluate the applicant’s collateral and creditworthiness and recommend whether or not to provide credit.
Progress and interim reports can simply record the work done so far and the work remaining on a project. These reports can also analyze the quality of the work and recommend that a project be stopped, continued, or restructured.
Trip reports can simply share what the author learned at a conference or during a visit to a customer or supplier. These reports can also recommend action based on that information.
Closure reports can simply document the causes of a failure or possible products that are not economically or technically feasible under current conditions. They can also recommend action to prevent such failures in the future.
Figure 17.1 Three Levels of Reports
Always the Same, Always Different
Jane Garrard, the vice president for investor and media relations at Tupperware Brands Corporation, is responsible for producing an earnings release statement for the company Web site every quarter. Every quarter that task requires the same steps. She gathers facts and data about Tupperware’s business and earnings; summarizes that information for Tupperware’s shareholders and investors; produces the tables, graphs, and balance sheets that are part of the standard earnings release “boilerplate,” or template; and circulates the document for review. That’s a writing process, and it’s the same every quarter.
But it’s also a communication process, one that’s different every quarter. Every quarter brings different numbers, different directions for the company, different market conditions with different investor expectations. Every quarter, it’s a new message, requiring a new approach and new interpretations. Garrard and her team must decide the best ways to balance their responsibility to Tupperware’s customers with regulatory requirements and company interests. They have to interpret the data, analyze their audience, consult with their coworkers, and then compose an earnings release statement that meets everyone’s goals. That’s a challenging job.
Adapted from Assaf Kadem, “Facts and Interpretation,” Communication World, December 2006, 30.
The Report Production Process
When you write a report, you know the actual writing will take a significant chunk of time. But you should also plan to spend significant time analyzing your data, revising drafts, and preparing visuals and slides.
When you write a report for a class project, plan to complete at least one-fourth of your research before you write the proposal. Begin analyzing your data as you collect it; prepare your list of sources and drafts of visuals as you go along. Start writing your first draft before the research is completed. An early draft can help clarify where you need more research. Save at least one-fourth of your time at the end of the project to think and write after all your data are collected. For a collaborative report, you’ll need even more time to write and revise.
Up-front planning helps you use your time efficiently. Start by thinking about the whole report process. Talk to your readers to understand how much detail and formality they want. Look at reports that were produced earlier (sample reports in this text are in Chapter 19). List all the parts of the report you’ll need to prepare. Then articulate—to yourself or your team members—the purposes, audiences, and generic constraints for each part. The fuller idea you have of the final product when you start, the fewer drafts you’ll need to write and the better your final product will be.
Research for Developing Countries
Procter & Gamble researchers found that 60% of shoppers at tiny stores in developing countries already know what they want, so they do not spend time browsing. But they do gaze at the cashier’s area for 5 seconds as they wait for their purchase or change. So P&G is thinking of ways to persuade store owners to put more P&G products in these areas.
Because running water is in short supply for many low-income Mexican consumers, P&G researchers developed a fabric softener that, when added to the laundry load along with the detergent, can eliminate a rinse cycle in the kinds of washing machines being used.
To keep prices down for these customers, P&G employs “reverse engineering.” It starts with the price consumers can afford for the product and then adjusts features and manufacturing accordingly. To hold down the cost of a detergent used for hand washing clothes, P&G used fewer enzymes. The result was a cheaper product, and one that was gentler on the hands than the regular detergent.
Adapted from Ellen Byron, “P&G’s Global Target: Shelves of Tiny Stores: It Woos Poor Women Buying Single Portions; Mexico’s ‘Hot Zones,’” Wall Street Journal, July 16, 2007, A1.
Report Topics
Good reports grow out of real problems: disjunctions between reality and the ideal, choices that must be made. When you write a report as part of your job, the organization may define the topic. To think of problems for class reports, think about problems that face your college or university; housing units on campus; social, religious, and professional groups on campus and in your city; local businesses; and city, county, state, and federal governments and their agencies. Read your campus and local papers and newsmagazines; read the news on the Internet, watch it on TV, or listen to it on National Public Radio.
A good report problem in business or administration meets the following criteria:
1. The problem is
Real.
Important enough to be worth solving.
Narrow but challenging.
2. The audience for the report is
Real.
Able to implement the recommended action.
3. The data, evidence, and facts are
Sufficient to document the severity of the problem.
Sufficient to prove that the recommendation will solve the problem.
Available to you.
Comprehensible to you.
Often problems need to be narrowed. For example, “improving the college experiences of international students studying in the United States” is far too broad. First, choose one college or university. Second, identify the specific problem. Do you want to increase the social interaction between US and international students? Help international students find housing? Increase the number of ethnic grocery stores and restaurants? Third, identify the specific audience that would have the power to implement your recommendations. Depending on the specific topic, the audience might be the Office of International Studies, the residence hall counselors, a service organization on campus or in town, a store, or a group of investors.
Some problems are more easily researched than others. If you have easy access to the Chinese Student Association, you can survey them about their experiences at the local Chinese grocery. However, if you want to recommend ways to keep the Chinese grocery in business, but you do not have access to their financial records, you will have a much more difficult time solving the problem. Even if you have access, if the records are written in Chinese, you will have problems unless you read the language or have a willing translator.
Pick a problem you can solve in the time available. Six months of full-time (and overtime) work and a team of colleagues might allow you to look at all the ways to make a store more profitable. If you’re doing a report in 6 to 12 weeks for a class that is only one of your responsibilities, limit the topic. Depending on your interests and knowledge, you could choose to examine the store’s prices and brands, its inventory procedures, its overhead costs, its layout and decor, or its advertising budget.
Hasbro does extensive research to keep developing games people will play.
Research and Innovation: Fun and Games at Hasbro
On Fridays, employees at Hasbro spend their lunchtime playing board games and thinking about ways to update games or create new ones. The Friday games are just one of the creative approaches to research and innovation used at the company that manufactures some of America’s best-known board games, such as Monopoly, Scrabble, Sorry, and Clue.
In the world of board games, continuous innovation is necessary to fit games to changing consumer lifestyles and preferences. Hasbro invests in extensive market research, such as conducting online surveys, observing children and adults playing games in the company’s Game-Works lab, and talking with people about how they want to spend leisure time.
In response to information obtained through these strategies, Hasbro has modified several of its traditional games.
To accommodate consumers’ tight schedules, Hasbro developed “express” versions of Monopoly, Sorry, and Scrabble that can be completed within 20 minutes.
To address consumers’ desire for more balanced lives, The Game of Life now includes life experience, education, and family life as elements of a successful life, rather than basing success only on making the most money.
Based on 3 million votes cast in an online survey, a revised version of Monopoly replaces Boardwalk with Times Square and Pacific Avenue with Las Vegas Boulevard.
To attract customers who enjoy using technology to play games, game designers developed electronic versions of games.
Adapted from Carol Hymowitz, “All Companies Need Innovation: Hasbro Finds a New Magic,” Wall Street Journal, February 26, 2007, B1.
How you define the problem shapes the solutions you find. For example, suppose that a manufacturer of frozen foods isn’t making money. If the problem is defined as a marketing problem, the researcher may analyze the product’s price, image, advertising, and position in the market. But perhaps the problem is really that overhead costs are too high due to poor inventory management, or that an inadequate distribution system doesn’t get the product to its target market. Defining the problem accurately is essential to finding an effective solution.
Once you’ve defined your problem, you’re ready to write a purpose statement. The purpose statement goes both in your proposal and in your final report. A good purpose statement makes three things clear:
The organizational problem or conflict.
The specific technical questions that must be answered to solve the problem.
The rhetorical purpose (to explain, to recommend, to request, to propose) the report is designed to achieve.
The following purpose statement has all three elements:
Current management methods keep the elk population within the carrying capacity of the habitat but require frequent human intervention. Both wildlife conservation specialists and the public would prefer methods that controlled the elk population naturally. This report will compare the current short-term management techniques (hunting, trapping and transporting, and winter feeding) with two long-term management techniques, habitat modification and the reintroduction of predators. The purpose of this report is to recommend which techniques or combination of techniques would best satisfy the needs of conservationists, hunters, and the public.
Report audience: The superintendent of Yellowstone National Park
To write a good purpose statement, you must understand the basic problem and have some idea of the questions that your report will answer. Note, however, that you can (and should) write the purpose statement before researching the specific alternatives the report will discuss.
Research Strategies for Reports
Research for a report may be as simple as getting a computer printout of sales for the last month; it may involve finding published material or surveying or interviewing people. Secondary research retrieves information that someone else gathered. Library research and online searches are the best known kinds of secondary research. Primary research gathers new information. Surveys, interviews, and observations are common methods for gathering new information for business reports.
Market Research Helps Boost Wal-Mart’s Bottom Line
Market research helped Wal-Mart find its groove again. After imitating rival Target’s emphasis on fashion and cultivating an upscale image, Wal-Mart realized that imitation had failed to revive its stagnant sales and growing unfavorable public image. In stepped Stephen Quinn, formerly of PepsiCo, who told Wal-Mart bosses that the retail giant needed to do what it had seldom done: marketing research.
Among its findings, Wal-Mart discovered that its lower-income pharmacy customers often split their pills in half. This little factoid became the idea behind Wal-Mart’s tremendously successful $4 prescription drug plan. Above all, Wal-Mart found that its most profitable customers were also its most price conscious. Thus, rather than imitating Target, it needed to refocus on being the most cost-effective supplier.
Apart from its efficient overall operations, analysts believe that marketing research helped Wal-Mart achieve robust sales growth, resulting in a 32% increase in the price of its stock since 2007.
Adapted from Suzanne Kapner, “Wal-Mart Enters the Ad Age,” in Fortune: Fortune 500, http://money.cnn.com/2008/08/13/news/companies/Walmart_enters_ad_age_kapner.fortune/index.htm (accessed April 10, 2009).
Finding Information Online and in Print
You can save time and money by checking online and published sources of data before you gather new information. Many college and university libraries provide
Workshops on research techniques.
Handouts explaining how to use printed and computer-based sources.
Free or inexpensive access to computer databases.
Research librarians who can help you find and use sources.
Categories of sources that may be useful include
Specialized encyclopedias for introductions to a topic.
Indexes to find articles. Most permit searches by keyword, by author, and often by company name.
Abstracts for brief descriptions or summaries of articles. Sometimes the abstract will be all you’ll need; almost always, you can tell from the abstract whether an article is useful for your needs.
Citation indexes to find materials that cite previous research. Citation indexes thus enable you to use an older reference to find newer articles on the topic. The Social Sciences Citation Index is the most useful for researching business topics.
Good research uses multiple media and sources.
Newspapers for information about recent events.
US Census reports, for a variety of business and demographic information.
To use a computer database efficiently, identify the concepts you’re interested in and choose keywords that will help you find relevant sources. Keywords are the terms that the computer searches for. If you’re not sure what terms to use, check the ABI/Inform Thesaurus for synonyms and the hierarchies in which information is arranged in various databases.
Specific commands allow you to narrow your search. For example, to study the effect of the minimum wage on employment in the restaurant industry, you might use a Boolean search (see Figure 17.2):
(minimum wage) and (restaurant or fast food) and (employment rate or unemployment).
This descriptor would give you the titles of articles that treat all three of the topics in parentheses. Without and, you’d get articles that discuss the minimum wage in general, articles about every aspect of restaurants, and every article that refers to unemployment, even though many of these would not be relevant to your topic. The or descriptor calls up articles that use the term fast food or the term restaurant. Many Web search engines, including AltaVista and Google, allow you to specify words that cannot appear in a source.
Many words can appear in related forms. To catch all of them, use the database’s wild card or truncated code for shortened terms and root words. To find this feature and others, go to the Advanced Search screen for the search engine you are using. Search engines vary in the symbols they use for searches, so be sure to check for accurate directions.
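The logic behind a Boolean search can be sketched in a few lines of code. The sketch below is purely illustrative (the article titles are invented, and real databases index sources far more efficiently), but it shows why and narrows a search while or widens it:

```python
# Illustrative sketch of Boolean narrowing. The titles are hypothetical;
# a real database applies the same logic to millions of indexed records.

articles = [
    "Minimum wage effects on fast food employment rates",
    "Restaurant industry trends",
    "Unemployment statistics for 2008",
    "Minimum wage laws and restaurant unemployment",
]

def matches(title):
    """Apply: (minimum wage) and (restaurant or fast food)
    and (employment rate or unemployment)."""
    t = title.lower()
    return (
        "minimum wage" in t
        and ("restaurant" in t or "fast food" in t)
        and ("employment rate" in t or "unemployment" in t)
    )

results = [a for a in articles if matches(a)]
for title in results:
    print(title)
```

Only the first and last titles survive, because each satisfies all three and clauses; dropping the and conditions would return every title that mentions any one of the terms, just as the text describes.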
Web Searching: Insider Information
“You want to know how fast an F-16 flies, but your search engine spits back details on camera shutter speeds. To save yourself these detours, take a page from the book of Sergey Brin, co-founder of the search-engine Google: Think more like a programmer and ask yourself: What words do I want to come back in my response?
“When Mr. Brin, who watches his waistline, wants to know how much protein is in a serving of chicken breast, he doesn’t just type in ‘chicken breast’ and ‘protein.’ He adds the word ‘grams.’
“Another of his winnowing tricks is to use the minus sign. To ensure that a search for ‘dolphins’ doesn’t bring up a slew of references to the Miami Dolphins football team, for instance, type dolphin -miami. (A space must be added before the minus sign, and the minus sign must go directly before the word to be removed.)
“Mr. Brin has another secret. For local addresses and phone numbers, he uses Yahoo.”
Quoted from “Tricks of the Trade: Web Searching with a Pro,” Wall Street Journal, June 5, 2002, D1. Copyright © 2002 by Dow Jones & Company, Inc. Reproduced with permission of Dow Jones & Company, Inc. via Copyright Clearance Center.
Figure 17.2 Example of a Boolean Search
Web search engines are particularly effective for words, people, or phrases that are unlikely to have separate pages devoted to them. For general topics or famous people, directories like Yahoo! may be more useful. Figure 17.3 lists a few of the specialized sources available.
Market Research Creates an Ad Campaign
[The tiny company that made Boker knives was considering dropping the brand. But before making the decision, executives and ad agency personnel talked to current and potential customers: people who hunt and fish.]
“They knew they were onto something when they ran into a longtime Boker customer who told them what he liked about the knife. He said that whenever he wants to make French fries, he just opens his Boker, puts it in the glove box of his four-by-four pickup with a bunch of potatoes, and drives over a rocky road in second gear. He added, ‘Of course, for hash browns I use third gear.’
“As they began to collect stories like this, others poured in. The agency ran a print campaign featuring fanciful pictures of old-time Boker users. The title of the ad ran: IN EVERY LIE ABOUT THE BOKER THERE’S A LITTLE BIT OF TRUTH. The ads recounted the tall tales, which guaranteed high readership. Then the ads told the serious part of the Boker message. Within a year, both market share and profits had doubled.”
Quoted from Robert H. Waterman, The Renewal Factor (Toronto: Bantam, 1987), 163.
Evaluating Web Sources
Some of the material on the Web is excellent, but some of it is wholly unreliable. With print sources, the editor or publisher serves as a gatekeeper, so you can trust the material in good journals. To put up a Web page, all one needs is access to a server.
Use four criteria to decide whether a Web site is good enough to use for a research project:
1. Authors. What person or organization sponsors the site? What credentials do the authors have?
2. Objectivity. Does the site give evidence to support its claims? Does it give both sides of controversial issues? Is the tone professional?
3. Information. How complete is the information? What is it based on?
4. Currency. How current is the information?
Answers to these questions may lead you to discard some of the relevant sites you find. For example, if you find five different Web pages about cell phones and car accidents that all cite the same Toronto study, you have one source, not five. Choose the most complete for your project.
Figure 17.3 Sources for Web Research
Web Hoaxes
You should not believe everything you read on the Web. Here are a few examples of the kinds of hoaxes that continue to circulate.
The top 10 hoaxes for 2008 still include the one saying Microsoft and AOL will pay you for forwarding “this e-mail” as part of their e-mail beta test. This urban legend has been circulating in its present form since 1999.
Another top 10 hoax (in its fourth year) urges readers to register their cell phone numbers on the Do Not Call Registry to prevent a host of telemarketing calls. (You can do so if you wish, but the directory of cell phone numbers, which telemarketers would need, is still on hold.)
A new one on the 2008 list warns about a postcard from Hallmark with a virus. While this one is false, Urban Legends warns that some e-card notices do carry viruses.
Go to http://urbanlegends.about.com/ and www.scambusters.org for more information about Internet hoaxes and frauds.
Adapted from David Emery, “Top 10 Urban Legends of 2008,” in About.com: Urban Legends: Reference, http://urbanlegends.about.com/od/reference/a/2008_top-ten.htm (accessed April 15, 2009).
Many students start their research with Wikipedia. If you are one of them, you are not alone. Wikipedia is the largest, most popular encyclopedia ever. It has over 10 million articles in 253 languages and is one of the top five Web sites.1 So, while it may be acceptable as a starting place, be aware that many instructors and other professionals frequently do not accept Wikipedia, or any encyclopedia, as an authoritative source. These are some of their reasons:
Many remember the beginnings of Wikipedia when it was full of errors.
Because not all entries are written by experts on the topic, some entries still contain errors.
Wikipedia makes the news when pranksters maliciously alter entries.
Thanks to WikiScanner, some editors have been shown to have self-interest. For instance, Diebold deleted paragraphs criticizing its electronic voting machines, and PepsiCo deleted paragraphs on negative health effects in the Pepsi entry.2
Because Wikipedia is constantly changing, information you cite may be changed or eliminated if someone goes to check it.
Watch Your Language
In two decades of writing and using surveys, psychologist Palmer Morrel-Samuels has seen that the wording of a survey question can affect responses. For example, the connotation of a phrase can unintentionally skew the way people react to a question. He recalls a survey that a maker of photographic equipment used to learn about the leadership skills of its managers. A question asked employees whether their manager “takes bold strides” and “has a strong grasp” of complicated issues. Male managers tended to outscore female managers. Morrel-Samuels noted that, in a literal sense, males on average take longer strides and have more muscle strength than females. The company changed the wording of the survey. “Has a strong grasp of complex problems” became “discusses complex problems with precision and clarity.” After this change, the difference in ratings of female and male managers disappeared. Employees apparently stopped mixing images of size and strength into their ratings of intellectual insight.
Another word-related bias is that respondents tend to agree more than disagree with statements. If a survey about managers asks employees whether their manager is fair, ethical, intelligent, knowledgeable, and so on, they are likely to assign all of these qualities to the manager—and to agree more and more as the survey goes along. To correct for this, some questions should be worded to generate the opposite response. For example, a statement about ethics can be balanced by a statement about corruption, and a statement about fairness can be balanced by a statement about bias or stereotypes.
Adapted from Palmer Morrel-Samuels, “Getting the Truth into Workplace Surveys,” Harvard Business Review 80, No. 2 (February 2002): 111–18.
Designing Questions for Surveys and Interviews
A survey questions a large group of people, called respondents or subjects. The easiest way to ask many questions is to create a questionnaire, a written list of questions that people fill out. An interview is a structured conversation with someone who will be able to give you useful information. Organizations use surveys and interviews to research both internal issues such as employee satisfaction and external issues such as customer satisfaction. Responding to anger over huge executive pay packages as the recession continued, biotechnology firm Amgen surveyed shareholders about its compensation plan, using a 10-question online survey.3 Figure 17.4 shows some of the mistakes and best practices in marketing survey research.
Figure 17.4 Marketing Research Mistakes and Best Practices
Source: Calvin P. Duncan, Constance M. O’Hare, and John M. Matthews, “Raising Your Market IQ,” Wall Street Journal, December 1, 2007, R4. Copyright © 2004 by Dow Jones & Company, Inc. Reproduced with permission of Dow Jones & Company, Inc. via Copyright Clearance Center.
Characteristics of Good Survey Questions
Surveys and interviews can be useful only if the questions are well designed. Good questions have these characteristics:
1. They ask only one thing.
2. They are phrased neutrally.
3. They avoid making assumptions about the respondent.
4. They mean the same thing to different people.
At a telecommunications firm, a survey asked employees to rate their manager’s performance at “hiring staff and setting compensation.” Although both tasks are part of the discipline of human resource management, they are different activities. A manager might do a better job of hiring than of setting pay levels, or vice versa. The survey gave respondents—and the company using the survey—no way to distinguish performance on each task.4
Phrase questions in a way that won’t bias the response. In the political sphere, for example, opinions about rights for homosexuals vary according to the way questions are asked. More Americans oppose “allowing gays and lesbians to marry legally” than oppose “legal agreements giving many of the same rights as marriage.” With regard to homosexual relations, the number of people who say such behavior should be “illegal” is greater than the number who say “consenting adults engaged in homosexual activities in private should be prosecuted for a crime.”5
The order in which questions are asked may matter. Asking about the economy—and its impact on families—before asking about the President will lower opinions of the President during bad economic times; the opposite is true for good economic times.6
Avoid questions that make assumptions about your subjects. The question “Does your wife have a job outside the home?” assumes that your respondent is a married man.
Use words that mean the same thing to you and to the respondents. If a question can be interpreted in more than one way, it will be. Words like often and important mean different things to different people. When a consulting firm called Employee Motivation and Performance Assessment helped Duke Energy assess the leadership skills of its managers, an early draft of the employee survey asked employees to rate how well their manager “understands the business and the marketplace.” How would employees know what is in the manager’s mind? Each respondent would have to determine what is reasonable evidence of a manager’s understanding. The question was rephrased to identify behavior the employees could observe: “resolves complaints from customers quickly and thoroughly.” The wording is still subjective (“quickly and thoroughly”), but at least all employees will be measuring the same category of behavior.7
Even questions that call for objective information can be confusing. For example, consider the owner of a small business confronted with the question “How many employees do you have?” Does the number include the owner as well as subordinates? Does it include both full- and part-time employees? Does it include people who have been hired but who haven’t yet started work, or someone who is leaving at the end of the month? A better wording would be

How many full-time employees were on your payroll the week of May 16?

Public Agenda provides 20 questions to ask about poll results (http://www.publicagenda.org/pages/20-questions-journalists-should-ask-about-poll-results). Questions include
• Who did the poll and who paid for it?
• How many people were surveyed and how were they chosen?
• How was the survey done?
• What questions were asked?
Designing survey questions is an important and difficult part of getting valid results. For examples of surveys, including information about their design, visit the Gallup Poll pages of the Gallup Organization’s Web site. The Web site also includes videos of Gallup’s survey work. Some videos discuss the results of particular polls; some also talk about the poll’s audience and purpose, important factors in a survey’s design. Watch several videos and examine several polls for the ways in which audience and purpose shape the questions in the survey.
As discussed in Chapter 7 (see p. 188), bypassing occurs when two people use the same words or phrases but interpret them differently. To catch questions that can be misunderstood and to reduce bypassing, avoid terms that are likely to mean different things to different people and pretest your questions with several people who are like those who will fill out the survey. Even a small pretest with 10 people can help you refine your questions.
Kinds of questions
Survey questions can be categorized in several ways.
Closed questions have a limited number of possible responses. Open questions do not lock the subject into any sort of response. Figure 17.5 gives examples of closed and open questions. The second question in Figure 17.5 is an example of a Likert-type scale. Closed questions are faster for subjects to answer and easier for researchers to score. However, since all answers must fit into prechosen categories, they cannot probe the complexities of a subject. You can improve the quality of closed questions by conducting a pretest with open questions to find categories that matter to respondents. Analyzing the responses from open questions is usually less straightforward than analyzing responses from closed questions.
Use an “Other, Please Specify” category when you want the convenience of a closed question but cannot foresee all the possible responses. These responses can be used to improve choices if the survey is to be repeated.
Figure 17.5 Closed and Open Questions
What is the single most important reason that you ride the bus?
________ I don’t have a car.
________ I don’t want to fight rush-hour traffic.
________ Riding the bus is cheaper than driving my car.
________ Riding the bus conserves fuel and reduces pollution.
________ Other (please specify): ________
When you use multiple-choice questions, make the answer categories mutually exclusive and exhaustive. This means you make sure that any one answer fits only in one category and that a category is included for all possible answers. In the following example of overlapping categories, a person who worked for a company with exactly 25 employees could check either a or b. The resulting data would be hard to interpret.
Overlapping categories:

Indicate the number of full-time employees in your company on May 16:
________ a. 0–25
________ b. 25–100
________ c. 100–500
________ d. over 500

Discrete categories:

Indicate the number of full-time employees on your payroll on May 16:
________ a. 0–25
________ b. 26–100
________ c. 101–500
________ d. more than 500
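The value of discrete categories is that any answer fits exactly one bucket, which can be checked mechanically. A minimal Python sketch (the function name is illustrative; the boundaries come from the example above) maps any employee count to a single category:

```python
def employee_category(n):
    """Assign an employee count to one of four mutually exclusive,
    exhaustive categories, matching the 'discrete categories' example."""
    if n <= 25:
        return "a. 0-25"
    elif n <= 100:
        return "b. 26-100"
    elif n <= 500:
        return "c. 101-500"
    return "d. more than 500"

# A count of exactly 25 now fits only one category.
print(employee_category(25))   # a. 0-25
print(employee_category(26))   # b. 26-100
```

With the overlapping version, a respondent with 25 employees could defensibly check either of two boxes, and no code (or coder) could disambiguate the answer afterward.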
Branching questions direct different respondents to different parts of the questionnaire based on their answers to earlier questions.
If People Can Misunderstand the Question, They Will
Q: Give previous experience with dates.
A: Moderately successful in the past, but I am now happily married!
Q: How many autopsies have you performed on dead people?
A: All my autopsies have been on dead people.
Q: James stood back and shot Tommy Lee?
A: Yes.
Q: And then Tommy Lee pulled out his gun and shot James in the fracas?
A: (After hesitation) No sir, just above it.
Q: What is the country’s mortality rate?
A: 100%. Everybody dies.
Q: Give numbers of employees broken down by sex.
A: None. Our problem is booze.
Q: Sex?
A: Once a week.
Adapted from James Hartley, Designing Instructional Text (London: Kogan Page, 1978), 109; Richard Lederer, Anguished English (New York: Wyrick, 1988); and surveys of college students.
10. Have you talked to an academic adviser this year? ________ yes ________ no (If “no,” skip to question 14.)
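Online survey tools apply this skip logic automatically. A hypothetical Python sketch (the question numbers follow the example above; no real survey platform’s API is assumed) shows how an answer determines the next question presented:

```python
# Hypothetical branching logic: each answer maps to the next question to show.
questions = {
    10: {"text": "Have you talked to an academic adviser this year?",
         "next": {"yes": 11, "no": 14}},  # "no" skips questions 11-13
}

def next_question(q_id, answer):
    """Return the number of the question to ask after `answer` to question `q_id`."""
    return questions[q_id]["next"][answer]

print(next_question(10, "no"))   # 14
print(next_question(10, "yes"))  # 11
```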
Use closed multiple-choice questions for potentially embarrassing topics. Seeing their own situation listed as one response can help respondents feel that it is acceptable. However, very sensitive issues are perhaps better asked in an interview, where the interviewer can build trust and reveal information about himself or herself to encourage the interviewee to answer.
Generally, put early in the questionnaire questions that will be easy to answer. Put questions that are harder to answer or that people may be less willing to answer (e.g., age and income) near the end of the questionnaire. Even if people choose not to answer such questions, you’ll still have the rest of the survey filled out.
If subjects will fill out the questionnaire themselves, pay careful attention to the physical design of the document. Use indentations and white space effectively; make it easy to mark and score the answers. Label answer scales frequently so respondents remember which end is positive and which is negative. Include a brief statement of purpose if you (or someone else) will not be available to explain the questionnaire or answer questions. Pretest the questionnaire to make sure the directions are clear. One researcher mailed a two-page questionnaire without pretesting it. One-third of the respondents didn’t realize there were questions to answer on the back of the first page.
See Figure 17.6 for an example of a questionnaire for a student report.
Questionnaire for a Student Report Using Survey Research
Conducting Surveys and Interviews
Face-to-face surveys are convenient when you are surveying a fairly small number of people in a specific location. In a face-to-face survey, the interviewer’s sex, race, and nonverbal cues can bias results. Most people prefer not to say things they think their audience will dislike. For that reason, women will be more likely to agree that sexual harassment is a problem if the interviewer is also a woman. Members of a minority group are more likely to admit that they suffer discrimination if the interviewer is a member of the same minority.
Telephone surveys are popular because they can be closely supervised. Interviewers can read the questions from a computer screen and key in answers as the respondent gives them. The results can then be available just a few minutes after the last call is completed.
Phone surveys also have limitations. First, they reach only people who have phones and thus underrepresent some groups such as poor people. Answering machines, caller ID, and cell phones also make phone surveys more difficult. Since a survey based on a phone book would exclude people with unlisted numbers, professional survey-takers use automatic random-digit dialing.
To increase the response rate for a phone survey, call at a time respondents will find convenient. Avoid calling between 5 and 7 PM, a time when many families have dinner.
Mail surveys can reach anyone who has an address. Some people may be more willing to fill out an anonymous questionnaire than to give sensitive information to a stranger over the phone. However, mail surveys are not effective for respondents who don’t read and write well. Further, it may be more difficult to get a response from someone who doesn’t care about the survey or who sees the mailing as junk mail. Over the phone, the interviewer can try to persuade the subject to participate.
Online surveys deliver questions over the Internet. The researcher can contact respondents with e-mail containing a link to a Web page with the survey or can ask people by mail or in person to log on and visit the Web site with the survey. Another alternative is to post a survey on a Web site and invite the site’s visitors to complete the survey. This approach does not generate a random sample, so the results probably do not reflect the opinions of the entire population. Interactive technology makes it easy to use branching questions; the survey can automatically send respondents to the next question on a branch. However, many people worry about the privacy of online surveys, so they may be reluctant to participate. Researchers have found that a lower percentage of people are willing to complete online surveys than other kinds. To encourage participation, researchers should make online surveys as short as possible. Few people are likely to bother finishing a 32-screen survey, as one bank discovered.8
A major concern with any kind of survey is the response rate, the percentage of people who respond. People who refuse to answer may differ from those who respond, and you need information from both groups to be able to generalize to the whole population. Low response rates pose a major problem, especially for phone surveys. Answering machines and caller ID are commonly used to screen incoming calls, resulting in decreased response rates.
Widespread use of cell phones in recent years has also negatively affected the ability of telephone surveyors to contact potential respondents.9 Federal figures show that 3 out of 10 households either have no landline phone or rarely answer it. People in the latter category often rely so heavily on their cell phones that they assume calls on the landline are telephone solicitors or surveys and thus do not answer. People who have only cell phones tend to be young, less affluent, unmarried, and less likely to own their homes.10 These figures show that phone surveys that are landline only, as most are, may have significant biases built into their samples.
And the Survey Says . . .
Increasingly, companies use surveys to measure their customers’ satisfaction with their products or services. But are they really using that data? A survey by Bain & Company of 362 companies and their customers has revealed a discrepancy between the companies’ and consumers’ perceptions of customer satisfaction. The survey found that while 80% of the companies thought they were providing a “superior” consumer experience, only 8% of the customers described their experience that way.
Unfortunately, an even wider disconnect exists between measuring customer satisfaction and changing corporate business practices to achieve it. Getting data is one thing; circulating the findings and making sure they are put to use is another. So when you go to the trouble of collecting data, make sure that you also make the data work for you.
Adapted from Christopher Meyer and Andre Schwager, “Understanding Customer Experience,” Harvard Business Review 85, no. 2 (February 2007): 116–26.
The problem of nonresponse has increased dramatically in recent years. The University of Michigan’s Survey of Consumer Attitudes experienced only a small drop in response rate between 1979 and 1996, from 72% to 67%. But the deterioration has accelerated since then; by 2003 the response rate had dropped to 48%.11
According to figures that researchers have reported to the Marketing Research Association, the response rate for door-to-door surveys was 53%, and the response rate for face-to-face surveys in malls and other central locations was 38%. The response rate for Web surveys averaged 34%.12 To get as high a response rate as possible, good researchers follow up, contacting non-respondents at least once and preferably twice to try to persuade them to participate in the survey. Sometimes money or other rewards are used to induce people to participate.
Online Polling: Pros and Cons
Many polls used to be conducted by phone. In the last few years, however, companies have turned to online market research. The advantages are obvious: consumers can take online surveys at their convenience, and researchers can ask questions that are uncomfortable to ask in person (“How often do you bathe?”). Additionally, online research can cover numerous respondents and is faster and cheaper.
However, companies such as P&G doubt the reliability of online market research, which critics say is not based on random sampling. As an example, P&G cited the case of an online market research company that described a concept as attractive but in a poll a week later found it “below average.” Jon Krosnick, a Stanford professor, says that “drawing hard conclusions from online polls can be like making an automobile out of soft plastic.”
On the other side, Benjamin Malbon of the ad agency Bartle Bogle Hegarty believes that online research can be beneficial if it researches a specific target audience, such as housewives or people who like tennis. Such research need not be totally representative, he states.
Despite skepticism, online market research is likely to stay and become more reliable. Knowledge Networks, a market research company, has found a happy medium between the rigors of random sampling and the convenience of online research. It first telephones randomly selected respondents, then asks them to take the online survey.
Adapted from Burt Helm, “Online Polls: How Good Are They?” in BusinessWeek: Marketing, http://www.businessweek.com/magazine/content/08_24/b4088086641658.htm (accessed April 10, 2009).
Selecting a sample for surveys and interviews
To keep research costs reasonable, usually only a sample of the total population is polled. How that sample is chosen and the attempts made to get responses from nonrespondents will determine whether you can infer that what is true of your sample is also true of the population as a whole.
A sample is a subset of the population. The sampling units are those actually sampled. Frequently, the sampling unit is an individual. If a list of individuals is not available then a household can be the sampling unit. The list of all sampling units is the sampling frame. For interviews, this could be a list of all addresses, or for companies a list of all Fortune 500 CEOs.13 The population is the group you want to make statements about. Depending on the purpose of your research, your population might be all Fortune 1000 companies, all business students at your college, or all consumers of tea in the mid-Atlantic states.
A convenience sample is a group of subjects who are easy to get: students who walk through the union, people at a shopping mall, workers in your own unit. Convenience samples are useful for a rough pretest of a questionnaire and may be acceptable for some class research projects. However, you cannot generalize from a convenience sample to a larger group.
A purposive or judgment sample is a group of people whose views seem useful. Someone interested in surveying the kinds of writing done on campus might ask each department for the name of a faculty member who cared about writing, and then send surveys to those people.
In a random sample, each person in the population theoretically has an equal chance of being chosen. When people say they did something randomly they often mean without conscious bias. However, unconscious bias exists. Someone passing out surveys in front of the library will be more likely to approach people who seem friendly and less likely to ask people who seem intimidating, in a hurry, much older or younger, or of a different race, class, or sex. True random samples rely on random digit tables, published in statistics texts and books such as A Million Random Digits. An online random number table site can be found at http://ts.nist.gov/WeightsandMeasures/upload/AppendB-HB133-05-Z.pdf. Computers can also be programmed to generate random numbers.
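As an alternative to printed random digit tables, a seeded pseudorandom generator can draw the sample. A minimal Python sketch, assuming a hypothetical sampling frame of 5,000 student names:

```python
import random

# Hypothetical sampling frame: a list of 5,000 student names.
frame = [f"student_{i}" for i in range(1, 5001)]

rng = random.Random(2024)          # seeding makes the draw reproducible
sample = rng.sample(frame, k=200)  # each member has an equal chance of selection

print(len(sample))       # 200
print(len(set(sample)))  # 200 -- sampling is without replacement
```

Because the draw is made by the generator rather than by a person standing in front of the library, the unconscious biases described above never enter the selection.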
Speedo conducted extensive research before launching its new LZR Racer Speedo, which enabled many swimmers to break records at the Beijing Olympics.
Source: Christopher Rhodes and Hiroko Tabuchi, “Olympic Swimmers Race to Get Well Suited,” Wall Street Journal, June 12, 2008, B8.
If you take a true random sample, you can generalize your findings to the whole population from which your sample comes. Consider, for example, a random phone survey that shows 65% of respondents approve of a presidential policy. Measures of variability should always be attached to survey-derived estimates like this one. Typically, a confidence interval provides this measure of variability. Using the confidence interval, we might conclude it is likely that between 58% and 72% of the population approve of the presidential policy when the confidence interval is ± 7%. The accuracy range is based on the size of the sample and the expected variation within the population. Statistics texts tell you how to calculate these measures of variability.
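The ±7% figure in that example can be reproduced with the usual normal-approximation formula for a proportion. In this Python sketch, the sample size of 178 is an assumption chosen to yield roughly a 7-point margin:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

p_hat = 0.65  # 65% of respondents approve
n = 178       # hypothetical sample size
m = margin_of_error(p_hat, n)

print(round(m, 2))                               # 0.07
print(round(p_hat - m, 2), round(p_hat + m, 2))  # 0.58 0.72
```

Note how the margin shrinks with the square root of the sample size: quadrupling the sample only halves the interval, which is why very precise polls are expensive.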
Do not confuse sample size with randomness. A classic example is the 1936 Literary Digest poll, which predicted that Republican Alf Landon would beat the incumbent Democratic President Franklin Roosevelt. Literary Digest sent out 10 million ballots to its magazine subscribers as well as to people who owned cars and telephones, all of whom in 1936 were richer than the average voter—and more Republican.14
Many people mistakenly believe any survey provides information about the general population. One survey with a biased sample that got much publicity involved “sexting.” “One in five teenagers electronically share nude or seminude photos of themselves,” declared the news stories. However, the sample for the survey came from a teenage research panel formed by phone and online recruiting, plus recruiting from existing panel members. This sample included many electronically savvy users, users who would be comfortable sending pictures electronically. So the figures were true for this panel (and even there, the numbers were raised by responses from 18- and 19-year-old panelists, a group less inhibited than younger teens and more likely to respond), but because the panel is not a representative sample, no conclusions should be drawn about a wider population.15
Conducting research interviews
Schedule interviews in advance; tell the interviewee about how long you expect the interview to take. A survey of technical writers (who get much of their information from interviews) found that the best times to interview subject matter experts are Tuesdays, Wednesdays, and Thursday mornings.16 People are frequently swamped on Mondays, and on Fridays they are looking forward to the weekend or trying to finish the week’s work.
Interviews can be structured or unstructured. In a structured interview, the interviewer uses a detailed list of questions to guide the interview. Indeed, a structured interview may use a questionnaire just as a survey does.
In an unstructured interview, the interviewer has three or four main questions. Other questions build on what the interviewee says. To prepare for an unstructured interview, learn as much as possible about the interviewee and the topic. Go into the interview with three or four main topics you want to cover.
Interviewers sometimes use closed questions to start the interview and set the interviewee at ease. The strength of an interview, however, is getting at a person’s attitudes, feelings, and experiences. Situational questions let you probe what someone would do in a specific circumstance. Hypothetical questions that ask people to imagine what they would do generally yield less reliable answers than questions about critical incidents or key past events.
Hypothetical question:

What would you say if you had to tell an employee that his or her performance was unsatisfactory?

Critical incident question:

You’ve probably been in a situation where someone who was working with you wasn’t carrying his or her share of the work. What did you do the last time that happened?
A mirror question paraphrases the content of the last answer: “So you confronted him directly?” “You think that this product costs too much?” Mirror questions are used both to check that the interviewer understands what the interviewee has said and to prompt the interviewee to continue talking. Probes follow up an original question to get at specific aspects of a topic:
Question:

What do you think about the fees for campus parking?

Probes:

Would you be willing to pay more for a reserved space? How much more? Should the fines for vehicles parked illegally be increased? Do you think fees should be based on income?
Probes are not used in any definite order. Instead, they are used to keep the interviewee talking, to get at aspects of a subject that the interviewee has not yet mentioned, and to probe more deeply into points that the interviewee brings up.
If you read questions to subjects in a structured interview, use fewer options than you might in a written questionnaire.
Ethical Issues in Interviewing
If you’re trying to get sensitive information, interviewees may give useful information when the interview is “over” and the tape recorder has been turned off. Is it ethical to use that information?
If you’re interviewing a hostile or very reluctant interviewee, you may get more information if you agree with everything you can legitimately agree to, and keep silent on the rest. Is it ethical to imply acceptance even when you know you’ll criticize the interviewee’s ideas in your report?
Most people would say that whatever public figures say is fair game: they’re supposed to know enough to defend themselves. Do you agree?
Many people would say that different rules apply when you’ll cite someone by name than when you’ll use the information as background or use a pseudonym so that the interviewee cannot be identified. Do you agree?
As a practical matter, if someone feels you’ve misrepresented him or her, that person will be less willing to talk to you in the future. But quite apart from practical considerations, interview strategies raise ethical issues as well.
I’m going to read a list of factors that someone might look for in choosing a restaurant. After I read each factor, please tell me whether that factor is Very Important to you, Somewhat Important to you, or Not Important to you.
If the interviewee hesitates, reread the scale.
Always tape the interview. Test your equipment ahead of time to make sure it works. If you think your interviewee may be reluctant to speak on tape, take along two tapes and two recorders; offer to give one tape to the interviewee.
Pulitzer Prize winner Nan Robertson offers the following advice to interviewers:17
Do your homework. Learn about the subject and the person before the interview.
To set a nervous interviewee at ease, start with nuts-and-bolts questions, even if you already know the answers.
Save controversial questions for the end. You’ll have everything else you need, and the trust built up in the interview makes an answer more likely.
Go into an interview with three or four major questions. Listen to what the interviewee says and let the conversation flow naturally.
At the end of the interview, ask for office and home telephone numbers in case you need to ask an additional question when you write up the interview.
Well-done interviews can yield surprising results. When the owners of Kiwi shoe polish interviewed people about what they wanted in shoe care products, they learned that shiny shoes were far down on the list. What people cared most about was how fresh and comfortable their shoes were on the inside. So Kiwi developed a new line of products, including “fresh’ins” (thin, lightly scented shoe inserts) and “smiling feet” (cushioning and nonslip pads and strips).18
Using Focus Groups
A focus group, yet another form of qualitative research, is a small group of people convened to provide a more detailed look into some area of interest—a product, service, process, concept, and so on. Because the group setting allows members to build on each other’s comments, carefully chosen focus groups can provide detailed feedback; they can illuminate underlying attitudes and emotions relevant to particular behaviors.
Focus groups also have some problems. The first is the increasing use of professional respondents drawn from databases, a practice usually driven by cost and time limitations. The Association for Qualitative Research Newsletter labeled these respondents as a leading industry problem.19 In order to get findings that are consistent among focus groups, the groups must accurately represent the target population. A second problem with focus groups is that such groups sometimes aim to please rather than offering their own evaluations.
An updated version of the focus group is the online network. Del Monte, for instance, has an online community, called “I Love My Dog,” of 400 hand-picked dog enthusiasts that it can query about dog products. These networks, first cultivated as research tools by technology and video game companies, are being employed by various producers of consumer products and services. The networks are often cheaper and more effective than traditional focus groups because they have broader participation and allow for deeper and ongoing probing. Companies can use them for polls, real-time chats with actual consumers, and product trials.20
Nokia’s Global Research
Nokia is working to add new customers in emerging markets. To do so, their researchers spend time with people around the world to understand communication behaviors. Other researchers look at both long- and short-term trends for colors, surface textures, and user choices.
One key finding was that in rural areas, mobile phones are shared by families or even villages. Therefore, Nokia designed new phones that are sturdy enough to withstand usage from different people: they have a special grip area that makes them easier to hold in hot climates and seamless keypads to keep out dust. Address books allow each user to save his or her own contacts and numbers separately.
The phones also have a demo mode so people with limited or no experience can quickly learn how to use the phones.
Adapted from Nandini Lakshman, “Nokia’s Global Design Sense,” in Inside Innovation, http://www.businessweek.com/print/innovate/content/aug2007/id20070810_686743.htm (accessed April 10, 2009).
Observing Customers and Users
Answers to surveys and interviews may differ from actual behavior—sometimes greatly. To get more accurate consumer information, many marketers observe users. For example, one problem with asking consumers about their television-watching behavior is that they sometimes underreport the number of hours they watch and the degree to which they watch programs they aren’t proud of liking. Researchers have tried to develop a variety of measurement methods that collect viewing data automatically. Arbitron introduced the Portable People Meter (PPM), which receives an inaudible electronic signal from radio stations and broadcast and cable TV stations. Consumers simply carry the PPM, and it records their media exposure. One of the first results showed that consumers listened to radio more than they had indicated in diaries.21 Nielsen Media Research has added commercial viewings to its famous TV show numbers; advertisers are naturally anxious to know how many people actually watch commercials instead of leaving to get a snack or fast-forwarding through them on digital video recorders.22 Nielsen has also started tracking college students’ viewing, installing its people meters in commons areas such as dorms. The new data boosted ratings for some shows, such as Grey’s Anatomy and America’s Next Top Model, by more than 35%.23
Observation can tell marketers more about customers than the customers can put into words themselves. Intuit, a leader in observation studies, sends employees to visit customers and watch how they use Intuit products such as QuickBooks. Watching small businesses struggle with QuickBooks Pro told the company of the need for a new product, QuickBooks Simple Start.24 Nokia sends researchers to various countries to learn about local tastes in mobile phones. They have learned that in India the phone is a status symbol and therefore must have the right style and project the proper image. In China and Africa, on the other hand, price is paramount.25 See the sidebar on this page for more about Nokia’s research.
Kroger has researched customer shopping patterns to help increase sales.
Source: Aili McConnon, “Grocers: A Shift toward Thrift,” BusinessWeek, August 25/September 1, 2008, 72, 73.
Observation can also be used for gathering in-house information such as how efficiently production systems operate and how well employees serve customers. Many businesses use “mystery shoppers.” For instance, McDonald’s has used mystery shoppers to check cleanliness, customer service, and food quality. The company posts store-by-store results online, giving store operators an incentive and the information they need to improve quality on measures where they are slipping or lagging behind the region’s performance.26
Even health care facilities use mystery shoppers. After they give their reports, the most common changes are improved estimates of waiting times and better explanations of medical procedures. So many organizations use mystery shoppers that there is a Mystery Shopping Providers Association; it reported $600 million revenue for the industry in 2004.27
Observation is often combined with other techniques to get the most information. Think-aloud protocols ask users to voice their thoughts as they use a document or product: “First I’ll try. . . .” These protocols are tape-recorded and later analyzed to understand how users approach a document or product. Interruption interviews interrupt users to ask them what’s happening. For example, a company testing a draft of computer instructions might interrupt a user to ask, “What are you trying to do now? Tell me why you did that.” Discourse-based interviews ask questions based on documents that the interviewee has written: “You said that the process is too complicated. Tell me what you mean by that.”
Looking with the Customers’ Eyes
IDEO, a design firm based in Palo Alto, California, uses observational research to design work processes that improve the customer’s experience. IDEO requires its clients to participate in the research so that they can see how it feels to be one of their own customers. Clients may try using the company’s product or go on shopping trips, or they may quietly observe customers. Following an initial observation phase, IDEO works with clients to use the observation data for brainstorming. IDEO then prepares and tests prototypes of the redesigned service, refines the ideas, and puts the revisions into action.
IDEO helped Kaiser Permanente revise its long-term growth plan to be more focused on clients’ experiences with the health system. Working in teams with nurses, doctors, and managers from Kaiser, IDEO employees observed patients and occasionally role-played patient experiences. They saw that the check-in process was annoying, and waiting rooms were uncomfortable. Many of the patients arrived with a relative or friend for support, but they were often not allowed to remain together. Sitting alone in examination rooms was unpleasant and unnerving.
Based on these observations, Kaiser realized that it needed to focus more on improving patient experiences than on the original plan of modernizing buildings. The company created more comfortable areas in which patients could wait with family and friends, as well as examination rooms large enough to accommodate two people in addition to the patient. Instructions on where to go were made clearer as well.
Adapted from Bruce Nussbaum, “The Power of Design,” BusinessWeek, May 17, 2004, 86.
Source Citation and Documentation
In a good report, sources are cited and documented smoothly and unobtrusively. Citation means attributing an idea or fact to its source in the body of the report: “According to the 2000 Census . . . ” “Jane Bryant Quinn argues that. . . .” Citing sources demonstrates your honesty and enhances your credibility. Documentation means providing the bibliographic information readers would need to go back to the original source. The two usual means of documentation are notes and lists of references.
Failure to document and cite sources is plagiarism, the passing off of the words or ideas of others as one’s own. Plagiarism can lead to nasty consequences. The news regularly showcases examples of people who have been fired or sued for plagiarism. Now that curious people can type sentences into Google and find the sources, plagiarism is easier than ever to catch.
Note that citation and documentation are used in addition to quotation marks. If you use the source's exact words, you'll use the name of the person you're citing and quotation marks in the body of the report; you'll indicate the source either in parentheses plus a list of references or in a footnote or endnote. If you put the source's idea into your own words, or if you condense or synthesize information, you don't need quotation marks, but you still need to tell whose idea it is and where you found it. See Figures 17.7 and 17.8 for examples of quoting and paraphrasing.
Long quotations (four typed lines or more) are used sparingly in business reports. Since many readers skip quotes, always summarize the main point of the quotation in a single sentence before the quotation itself. End the sentence with a colon, not a period, since it introduces the quote. Indent long quotations on the left and right to set them off from your text. Indented quotations do not need quotation marks; the indentation shows the reader that the passage is a quote.
Figure 17.7 Report Paragraphs with APA Documentation
Figure 17.8 Report Paragraphs with MLA Documentation
Figure 17.9 APA Format for Sources Used Most Often in Reports
To make a quotation fit the grammar of your report, you may need to change one or two words. Sometimes you may want to add a few words to explain something in the longer original. In both cases, use square brackets to indicate words that are your replacements or additions. Omit any words in the original source that are not essential for your purposes. Use ellipses (spaced dots) to indicate where your omissions are. See Figures 17.7 and 17.8 for examples.
Document every fact and idea that you take from a source except facts that are common knowledge. Historical dates and facts are considered common knowledge. Generalizations are considered common knowledge (“More and more women are entering the workforce”) even though specific statements about the same topic (such as the percentage of women in the workforce in 1975 and in 2000) would require documentation.
Figure 17.10 MLA Format for Sources Used Most Often in Reports
The three most widely used formats for footnotes, endnotes, and bibliographies in reports are those of the American Psychological Association (APA), the Modern Language Association (MLA), and The Chicago Manual of Style, which this book uses. Internal documentation provides the source in parentheses within the text. In MLA format, the source is indicated by the author's last name (if that name isn't already in the sentence), or by the last name plus the date of the work if you're using two or more works by the same author or if the dates of the works are important; MLA internal documentation also provides the page number. In APA format, the source is indicated by the author's last name plus the year, unless those items already appear in the text. APA format gives page numbers only for quotes or in cases where readers may need help finding the exact location. The full bibliographic citation appears in a list of references or works cited at the end of the report.
Figures 17.7 and 17.8 show a portion of a report in APA and MLA formats, respectively, with the list of references (APA) or works cited (MLA). Figures 17.9 and 17.10 show the APA and MLA formats for the sources used most often in reports.
If you have used many sources that you have not cited, you may want to list separately both works cited and works consulted. The term bibliography covers all sources on a topic.
If you use a printed source that is not readily available, consider including it as an appendix in your report. For example, you could copy an ad or include an organization’s promotional brochure.
Summary of Key Points
Information reports collect data for the reader; analytical reports present and interpret data; recommendation reports recommend action or a solution.
A good purpose statement must make three things clear:
The organizational problem or conflict.
The specific technical questions that must be answered to solve the problem.
The rhetorical purpose (to explain, to recommend, to request, to propose) that the report is designed to achieve.
Use indexes and directories to find information about a specific company or topic.
To decide whether to use a Web site as a source in a research project, evaluate the site’s authors, objectivity, information, and revision date.
A survey questions a large group of people, called respondents or subjects. A questionnaire is a written list of questions that people fill out. An interview is a structured conversation with someone who will be able to give you useful information.
Good questions ask just one thing, are phrased neutrally, avoid making assumptions about the respondent, and mean the same thing to different people.
Closed questions have a limited number of possible responses. Open questions do not lock the subject into any sort of response. Branching questions direct different respondents to different parts of the questionnaire based on their answers to earlier questions. A mirror question paraphrases the content of the last answer. Probes follow up an original question to get at specific aspects of a topic.
A convenience sample is a group of subjects who are easy to get. A judgment sample is a group of people whose views seem useful. In a random sample, each person in the population theoretically has an equal chance of being chosen. A sample is random only if a formal, approved random sampling method is used. Otherwise, unconscious bias can exist.
Citation means attributing an idea or fact to its source in the body of the report. Documentation means providing the bibliographic information readers would need to go back to the original source.