A key concept for determining the quality of any research is accuracy or, in other words, the degree to which measured estimates correspond to reality. No research in any field is absolutely accurate; errors occur in the course of every measurement, and it is because of these errors that measured results are not a perfect reflection of reality. In public opinion research this means, for example, that the share of people a survey identifies as supporting a particular opinion will differ from the real share of people in the population who hold that opinion. Researchers in every scientific field devote extensive attention to this problem and are consequently able to take steps to reduce or eliminate error, or at least to calculate the level of error adequately and estimate how much their results differ from reality. There are two main types of error that affect surveys:
The first type, sampling error, arises when we include only a limited number of people in the research (e.g. 1000 respondents) and then apply the observations to the larger target population as a whole (e.g. citizens of the Czech Republic over the age of 15). Had we chosen a different 1000 respondents, the results would have been slightly different. In the case of a representative sample, where the randomness of the sampling is ensured, probability theory allows us to calculate the size of this inaccuracy, and that way we know that we need to take into account an error of, for instance, +/- 1% or +/- 2.5%. We also know that the size of this error decreases as the number of respondents increases, and that a representative sample of approximately 1000 respondents produces results with a very small degree of error.
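For a proportion estimated from a simple random sample, this kind of sampling error can be quantified with the standard margin-of-error formula z * sqrt(p(1-p)/n). This is a general statistical sketch rather than the calculation any particular survey used; the 95% confidence level (z = 1.96) and the worst-case proportion p = 0.5 are assumptions chosen here for illustration.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for an estimated proportion p from a simple
    random sample of size n, at the confidence level implied by z
    (z = 1.96 gives roughly 95%). p = 0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# The margin shrinks as the sample grows, but only in proportion
# to the square root of n:
for n in (500, 1000, 4000):
    print(n, round(margin_of_error(n) * 100, 1))
```

Run as written, this prints margins of roughly 4.4, 3.1, and 1.5 percentage points, which illustrates the point above: quadrupling the sample from 1000 to 4000 respondents only halves the error, so samples of around 1000 are a common cost-accuracy compromise.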
The second type, non-sampling error, is unrelated to the design or process of sampling; its source lies in all the other factors involved in the research process. The causes of this type of error can be sought, for example, in how a questionnaire is built (the wording of the questions and the response options, the order of the questions in the questionnaire, the mode of interview, e.g. a self-administered questionnaire versus a personal interview), in the process of data collection (e.g. people may be unwilling to answer some questions or may not understand the instructions they are given, and interviewers may make mistakes when interviewing), and even in the final data analysis (e.g. errors that occur when working with electronic files). Non-sampling errors are also the subject of extensive research, which is why much is already known about the effect they have on research results and about how to avoid or at least reduce such errors. It is therefore crucial to know what the risks are and for everyone involved in the research to work with precision (from the researchers who build the questionnaire and process the data to the interviewers who conduct the interviews). It is very difficult to estimate the size of a non-sampling error, but the best guideline, even for lay readers, is to look at the accompanying information that describes how the research was carried out.
All research results are therefore data that are inevitably marked by some error that arises in the measurement process (in expert terminology these results are called ‘estimates’). It is therefore impossible to read these data in a perfectly literal sense or in isolation. To assess how reliable and accurate they are, it is necessary to have information about how the particular survey was conducted and also to bear in mind that there is likely a slight difference between the estimates produced in the survey and reality.