
Survey methodology is the field of applied statistics concerned with surveys of human populations. It studies the sampling of individual units from a population and the associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology includes instruments or procedures that ask one or more questions that may or may not be answered.

Researchers carry out statistical surveys with a view toward making statistical inferences about the population being studied, and such inferences depend strongly on the survey questions used. Polls about public opinion, public-health surveys, market-research surveys, government surveys and censuses are all examples of quantitative research that use survey methodology to answer questions about a population. Although censuses do not include a "sample", they do include other aspects of survey methodology, such as questionnaires, interviewers, and non-response follow-up techniques. Surveys provide important information for all kinds of public-information and research fields, for example, marketing research, psychology, health-services provision, and sociology.





Overview

A single survey is made of at least one sample (or the full population in the case of a census), a method of data collection (e.g., a questionnaire), and individual questions or items that become data that can be analyzed statistically. A single survey may focus on different types of topics such as preferences (e.g., for a presidential candidate), opinions (e.g., should abortion be legal?), behavior (smoking and alcohol use), or factual information (e.g., income), depending on its purpose. Because survey research is almost always based on a sample of the population, the success of the research depends on how representative the sample is of the target population of interest to the researcher. That target population can range from the general population of a given country to specific groups of people within that country, to a membership list of a professional organization, or a list of students enrolled in a school system (see also sampling (statistics) and survey sampling). The people who answer a survey are called respondents, and depending on the questions asked, their answers may represent themselves as individuals, their households, their employers, or other organizations they represent.

Survey methodology as a scientific field seeks to identify principles about sample design, data collection instruments, statistical adjustment of data, data processing, and final data analysis that can create systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost. Cost constraints are sometimes framed as improving quality within cost constraints, or alternatively, reducing cost for a fixed level of quality. Survey methodology is both a scientific field and a profession, meaning that some professionals in the field focus on survey errors empirically while others design surveys to reduce them. For survey designers, the task involves making a large set of decisions about thousands of individual features of a survey in order to improve it.

The most important methodological challenges of a survey methodologist include making decisions on how to:

  • Identify and select potential sample members.
  • Contact sampled individuals and collect data from those who are hard to reach (or reluctant to respond).
  • Evaluate and test questions.
  • Select the mode for posing questions and collecting responses.
  • Train and supervise interviewers (if they are involved).
  • Check data files for accuracy and internal consistency.
  • Adjust survey estimates to correct for identified errors (a weighting sketch follows this list).
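The adjustment step in the last item is often carried out with post-stratification weighting. The following Python sketch is a minimal, hypothetical illustration (the population shares, column names, and data are invented for the example, not taken from the article): each respondent is weighted by the ratio of their stratum's population share to its sample share before estimates are computed.

    # Minimal post-stratification weighting sketch (hypothetical data).
    import pandas as pd

    # Toy sample of respondents with an outcome of interest.
    sample = pd.DataFrame({
        "gender": ["f", "f", "m", "m", "m", "f", "m", "m", "m", "f"],
        "supports_policy": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    })

    # Assumed population shares (e.g., from a census): 75% female, 25% male.
    population_share = {"f": 0.75, "m": 0.25}
    sample_share = sample["gender"].value_counts(normalize=True)

    # Weight = population share / sample share for each stratum.
    sample["weight"] = sample["gender"].map(
        lambda g: population_share[g] / sample_share[g]
    )

    unweighted = sample["supports_policy"].mean()
    weighted = (sample["supports_policy"] * sample["weight"]).sum() / sample["weight"].sum()
    print(f"Unweighted estimate: {unweighted:.2f}, weighted estimate: {weighted:.2f}")

Here the weighted estimate counts under-represented strata more heavily, which is one common way such adjustments are implemented; real surveys may use more elaborate calibration methods.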




Selecting samples

Samples are selected from a sampling frame, which consists of a list of all members of the population of interest. The goal of a survey is not to describe the sample, but the larger population; this generalizing ability depends on the representativeness of the sample, as stated above. Each member of the population is termed an element. There are frequent difficulties in choosing a representative sample. One common error is selection bias, which results when the procedures used to select the sample lead to over-representation or under-representation of some significant aspect of the population. For example, if the population of interest consists of 75% females and 25% males, and the sample consists of 40% females and 60% males, then females are under-represented while males are over-represented. To minimize selection bias, stratified random sampling is often used: the population is divided into sub-populations called strata, and random samples are drawn from each stratum, with elements drawn for the sample in proportion to the stratum's size.
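The stratified random sampling described above can be sketched in a few lines of Python. This is a minimal illustration with an invented sampling frame and stratum column, not a production sampling routine:

    # Minimal stratified random sampling sketch with proportional allocation
    # (hypothetical sampling frame).
    import pandas as pd

    # Toy frame: 750 women and 250 men (75% / 25%).
    frame = pd.DataFrame({
        "id": range(1000),
        "gender": ["f"] * 750 + ["m"] * 250,
    })

    def stratified_sample(frame, stratum, n, seed=42):
        """Draw a simple random sample within each stratum, sized in
        proportion to the stratum's share of the frame."""
        parts = []
        for _, group in frame.groupby(stratum):
            share = len(group) / len(frame)
            parts.append(group.sample(n=round(n * share), random_state=seed))
        return pd.concat(parts)

    sample = stratified_sample(frame, "gender", n=100)
    print(sample["gender"].value_counts(normalize=True))  # about 75% f, 25% m by design

Because each stratum is sampled in proportion to its size, the sample's composition matches the frame's composition by construction, avoiding the kind of under- and over-representation described above.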



Data collection mode

There are several ways of administering a survey. The choice between administration modes is influenced by several factors, including

  1. costs,
  2. coverage of the target population,
  3. flexibility of asking questions,
  4. respondents' willingness to participate, and
  5. response accuracy.

Different methods create mode effects that change how respondents answer, and different methods have different advantages; a sketch for testing such a mode effect follows the list below. The most common modes of administration can be summarized as:

  • Phone
  • Mail (post)
  • Online surveys
  • Personal in-home survey
  • Personal mall-intercept or street-intercept survey
  • Hybrids of the above
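One simple way to check for a mode effect is to compare the distribution of answers obtained under different modes. The following Python sketch uses invented counts and a chi-squared test of independence; it is only an illustration, since in practice respondents are rarely assigned to modes at random and selection effects must also be considered:

    # Minimal mode-effect check: do answer distributions differ by mode?
    # (Hypothetical counts.)
    from scipy.stats import chi2_contingency

    # Rows: administration mode (phone, online); columns: counts answering yes / no.
    counts = [
        [120, 80],   # phone
        [90, 110],   # online
    ]

    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
    # A small p-value suggests the response distribution depends on the mode.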



Research design

There are several different designs, or overall structures, that can be used in survey research. The three general types are cross-sectional, successive independent samples, and longitudinal studies.

Cross-sectional study

In cross-sectional studies, a sample (or samples) is drawn from the relevant population and studied once. A cross-sectional study describes characteristics of that population at one point in time, but cannot give any insight into the causes of those population characteristics because it is a predictive, correlational design.

Successive independent samples studies

A successive independent samples design draws multiple random samples from a population at one or more times. This design can study changes within a population, but not changes within individuals, because the same individuals are not surveyed more than once. Such studies therefore cannot identify the causes of change over time. For successive independent samples designs to be effective, the samples must be drawn from the same population and must be equally representative of it. If the samples are not comparable, the changes between samples may be due to demographic characteristics rather than to time. In addition, the questions must be asked in the same way so that responses can be compared directly.
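Changes between successive independent samples are typically assessed by comparing the waves' estimates directly. The sketch below, using invented counts, applies a two-proportion z-test from statsmodels; it assumes both waves were drawn from, and equally represent, the same population:

    # Minimal comparison of two successive independent samples (hypothetical counts).
    from statsmodels.stats.proportion import proportions_ztest

    # Wave 1: 320 of 800 respondents agree; wave 2, a year later: 380 of 800.
    agree = [320, 380]
    sample_sizes = [800, 800]

    z_stat, p_value = proportions_ztest(count=agree, nobs=sample_sizes)
    print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
    # A small p-value indicates that the population proportion changed over time.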

Longitudinal research

Longitudinal studies take measurements of the same random sample at multiple points in time. Unlike successive independent samples designs, this design measures the differences in each participant's responses over time. This means that a researcher can potentially assess the reasons for changes in responses by examining the differences in respondents' experiences. Longitudinal studies are the easiest way to assess the effect of a naturally occurring event, such as divorce, that cannot be tested experimentally. However, longitudinal studies are both expensive and difficult to do. It is harder to find a sample that will commit to a months- or years-long study than to a 15-minute interview, and participants often leave the study before the final assessment. This attrition of participants is not random, so samples can become less representative with successive assessments. To account for this, a researcher can compare the respondents who left the survey to those who stayed, to see whether they are statistically different populations. Respondents may also try to be self-consistent in spite of changes to their survey answers.
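The attrition check described above, comparing respondents who left the study with those who stayed, can be sketched as follows. The data and column names are invented for illustration; a real analysis would compare many baseline characteristics, not just one:

    # Minimal attrition (dropout) check for a longitudinal survey
    # (hypothetical baseline data).
    import pandas as pd
    from scipy.stats import ttest_ind

    baseline = pd.DataFrame({
        "age":       [23, 35, 41, 29, 52, 47, 31, 38, 26, 60],
        "completed": [1,  1,  0,  1,  0,  1,  1,  0,  1,  0],  # 1 = stayed in the study
    })

    stayers = baseline.loc[baseline["completed"] == 1, "age"]
    dropouts = baseline.loc[baseline["completed"] == 0, "age"]

    t_stat, p_value = ttest_ind(stayers, dropouts, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # A significant difference suggests attrition is not random and the
    # remaining sample may no longer represent the original population.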



Questionnaire

Questionnaires are the most commonly used tool in survey research. However, the results of a particular survey are worthless if the questionnaire is written inadequately. Questionnaires should produce valid and reliable measures of demographic variables and should produce valid and reliable measures of individual differences generated by self-report scales.

Questionnaire as a tool

A variable category that is often measured in survey research is demographic variables, which are used to depict the characteristics of the people surveyed in the sample. Demographic variables include such measures as ethnicity, socioeconomic status, race, and age. Surveys often assess the preferences and attitudes of individuals, and many employ self-report scales to measure people's opinions and judgements about different items presented on a scale. Self-report scales are also used to examine differences among people on scale items. These self-report scales, which are usually presented in questionnaire form, are one of the most frequently used instruments in psychology, so it is important that the measures be constructed carefully, while also being reliable and valid.

Reliability and validity of self-report measurement

Reliable measures of self-report are defined by their consistency. Thus, a reliable self-report measure produces consistent results every time it is administered. A test's reliability can be measured in a few ways. First, one can calculate test-retest reliability, which involves administering the same questionnaire to a large sample at two different times. For the questionnaire to be considered reliable, people in the sample do not need to score identically on each test, but their position in the distribution of scores should be similar for both the test and the retest. Self-report measures will generally be more reliable when they have many items measuring a construct. Furthermore, measurements will be more reliable when the factor being measured has greater variability among the individuals in the sample being tested. Finally, reliability will be greater when the instructions for completing the questionnaire are clear and when there are limited distractions in the testing environment. In contrast, a questionnaire is valid if it measures what it was originally intended to measure; the construct validity of a measure is the degree to which it measures the theoretical construct it was originally supposed to measure.
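The test-retest check mentioned above, together with an internal-consistency statistic, can be computed directly from scored responses. The sketch below uses invented scores: test-retest reliability is estimated as the correlation between the two administrations, and internal consistency with Cronbach's alpha (a common statistic, not named in the text above):

    # Minimal reliability sketch: test-retest correlation and Cronbach's alpha
    # (hypothetical scores).
    import numpy as np

    # Test-retest: the same scale total for five respondents at two time points.
    test = np.array([12, 18, 15, 20, 9])
    retest = np.array([13, 17, 16, 19, 10])
    print(f"test-retest correlation: {np.corrcoef(test, retest)[0, 1]:.2f}")

    # Cronbach's alpha for a 4-item scale answered by five respondents
    # (rows = respondents, columns = items).
    items = np.array([
        [4, 5, 4, 5],
        [2, 3, 2, 2],
        [3, 3, 4, 3],
        [5, 4, 5, 5],
        [1, 2, 1, 2],
    ])
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
    print(f"Cronbach's alpha: {alpha:.2f}")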

It is important to note that there is evidence suggesting that self-report measures tend to be less accurate and reliable than alternative methods of collecting the same data (e.g., observational studies).

Composing a questionnaire

Six steps can be employed to construct a questionnaire that will produce reliable and valid results. First, one must decide what kind of information should be collected. Second, one must decide how to conduct the questionnaire. Third, one must construct a first draft of the questionnaire. Fourth, the questionnaire should be revised. Next, the questionnaire should be pretested. Finally, the questionnaire should be edited and the procedures for its use specified.

Guidelines for the effective wording of questions

The way a question is phrased can have a large impact on how a research participant will answer it. Thus, survey researchers must be conscious of their wording when writing survey questions. It is important for researchers to keep in mind that different individuals, cultures, and subcultures can interpret certain words and phrases differently from one another. There are two different types of questions that survey researchers use when writing a questionnaire: free-response questions and closed questions. Free-response questions are open-ended, whereas closed questions are usually multiple choice. Free-response questions are beneficial because they allow the respondent greater flexibility, but they are also very difficult to record and score, requiring extensive coding. In contrast, closed questions can be scored and coded more easily, but they diminish the expressivity and spontaneity of the respondent. In general, the vocabulary of the questions should be very simple and direct, and most should be less than twenty words. Each question should be edited for readability and should avoid leading or loaded questions. Finally, if multiple items are used to measure one construct, some of the items should be worded in the opposite direction to guard against response bias; the scoring of such reversed items is sketched below.
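Reverse-wording some items, as recommended at the end of the paragraph above, is handled at the scoring stage by reversing those items before totals are computed. A minimal sketch, with an invented 5-point scale and invented item names:

    # Minimal reverse-scoring sketch for a 5-point scale (hypothetical items).
    import pandas as pd

    responses = pd.DataFrame({
        "q1_satisfied":    [5, 4, 2, 5],   # positively worded item
        "q2_dissatisfied": [1, 2, 4, 1],   # negatively worded item, needs reversing
    })

    SCALE_MAX, SCALE_MIN = 5, 1

    # Reverse the negatively worded item so high scores mean the same thing on every item.
    responses["q2_reversed"] = (SCALE_MAX + SCALE_MIN) - responses["q2_dissatisfied"]

    # The total scale score is computed from consistently oriented items.
    responses["total"] = responses[["q1_satisfied", "q2_reversed"]].sum(axis=1)
    print(responses)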

Respondents' answers to open-ended questions can be coded into a response scale afterwards, or analysed using more qualitative methods.

Order of questions

Survey researchers should carefully construct the order of questions in a questionnaire. For questionnaires that are self-administered, the most interesting questions should be at the beginning of the questionnaire to catch the respondent's attention, while demographic questions should be near the end. In contrast, if a survey is being administered over the telephone or in person, demographic questions should be asked at the beginning of the interview to boost the respondent's confidence. Another reason to be mindful of question order is that it can produce survey response effects, in which one question may affect how people respond to subsequent questions as a result of priming.



Nonresponse reduction

The following ways have been recommended to reduce nonresponse in telephone and face-to-face surveys:

  • Advance letter. A short letter is sent in advance to inform the sampled respondents about the upcoming survey. The style of the letter should be personalized but not overdone. First, it announces that a phone call will be made, or that an interviewer wants to make an appointment to do the survey face-to-face. Second, the research topic is briefly described. Finally, it allows both an expression of the surveyor's appreciation of cooperation and an opening to ask questions about the survey.
  • Training. Interviewers are thoroughly trained in how to ask respondents questions, how to work with computers, and how to schedule callbacks to respondents who were not reached.
  • Short introduction. The interviewer should always start with a short introduction about him or herself. He or she should give their name, the institute they are working for, the length of the interview, and the goal of the interview. It can also be useful to make clear that you are not selling anything: this has been shown to lead to a slightly higher response rate.
  • Respondent-friendly survey questionnaire. The questions asked must be clear, non-offensive, and easy to respond to for the subjects under study.

Brevity is also often cited as increasing response rates. A 1996 literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important. A 2010 study looking at 100,000 online surveys found the response rate dropped by about 3% at 10 questions and about 6% at 20 questions, with the drop-off slowing (for example, only a 10% drop at 40 questions). Other studies have shown that the quality of responses degrades toward the end of long surveys.



Interviewer effect

Survey methodology has devoted much effort to determining the extent to which interviewee responses are affected by the physical characteristics of the interviewer. The main interviewer traits that have been demonstrated to influence survey responses are race, gender, and relative body weight (BMI). These interviewer effects are particularly operant when questions are related to the interviewer trait. Hence, the race of the interviewer has been shown to affect responses to measures regarding racial attitudes, interviewer sex affects responses to questions involving gender issues, and interviewer BMI affects answers to questions about eating and dieting. While interviewer effects have been investigated mainly for face-to-face surveys, they have also been shown to exist for interview modes with no visual contact, such as telephone surveys, and in video-enhanced web surveys. The usual explanation for interviewer effects is social desirability bias: survey participants may attempt to project a positive self-image in an effort to conform to the norms they attribute to the interviewer asking questions. Interviewer effects are one example of survey response effects.


See also

  • Data Documentation Initiative
  • Enterprise feedback management (EFM)
  • Likert scale
  • Official statistics
  • Paid surveys
  • Quantitative marketing research
  • Questionnaire construction
  • Trades estimator
  • Social research
  • Total survey error




Further reading

  • Abramson, J. J. and Abramson, Z. H. (1999). Survey Methods in Community Medicine: Epidemiological Research, Programme Evaluation, Clinical Trials (5th ed.). London: Churchill Livingstone/Elsevier Health Sciences. ISBN 0-443-06163-7.
  • Adèr, H. J., Mellenbergh, G. J., and Hand, D. J. (2008). Advising on Research Methods: A Consultant's Companion. Huizen, The Netherlands: Johannes van Kessel Publishing.
  • Andres, Lesley (2012). Designing and Conducting Survey Research. London: Sage.
  • Dillman, D. A. (1978). Mail and Telephone Surveys: The Total Design Method. New York: Wiley. ISBN 0-471-21555-4.
  • Engel, U., Jann, B., Lynn, P., Scherpenzeel, A. and Sturgis, P. (2014). Improving Survey Methods: Lessons from Recent Research. New York: Routledge. ISBN 978-0-415-81762-2.
  • Groves, R. M. (1989). Survey Errors and Survey Costs. Wiley. ISBN 0-471-61171-9.
  • Griffith, James (2014). "Survey Research in Military Settings." In Joseph Soeters, Patricia Shields and Sebastiaan Rietjens (eds.), Routledge Handbook of Research Methods in Military Studies, pp. 179-193. New York: Routledge.
  • Leung, Wai-Ching (2001). "Doing Surveys." Student BMJ (British Medical Journal, Student Edition), May 2001.
  • Ornstein, M. D. (1998). "Survey Research." Current Sociology 46(4): iii-136.
  • Prince, S. A., Adamo, K. B., Hamel, M., Hardt, J., Connor Gorber, S., & Tremblay, M. (2008). "A comparison of direct versus self-report measures for assessing physical activity in adults: a systematic review." International Journal of Behavioral Nutrition and Physical Activity, 5(1), 56. http://doi.org/10.1186/1479-5868-5-56
  • Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2006). Research Methods in Psychology (7th ed.). McGraw-Hill Higher Education. ISBN 0-07-111655-9 (pp. 143-192).
  • Singh, S. (2003). Advanced Sampling Theory with Applications: How Michael "Selected" Amy. Dordrecht: Kluwer Academic Publishers.
  • Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan (eds.) (2014). Routledge Handbook of Research Methods in Military Studies. New York: Routledge.
  • Surveys in Curlie (based on DMOZ)



External links

  • Media related to Survey methodology at Wikimedia Commons

Source of the article: Wikipedia
