Acta Universitatis Danubius. Relationes Internationales, Vol 9, No 1 (2016)
How to Identify Various Errors Made During the Survey Process
(The Case of the Tirana Electorate)
Saniela Xhaferi1, Aleksandër Kocani2
Abstract: When the error made in conducting surveys is discussed, usually only one type of error is reported: the simplest one to identify, which relates to sampling from the study population. Such an error constitutes, as it were, only "the tip of the iceberg" in the totality of errors made during the survey process. It is calculated with a formula, or read off the so-called "curve of error as a function of sample size", and it is defined only for probabilistic types of sampling. It expresses statistically the error we make when we seek to extend the results obtained from the sampled units to all units in the study population. Other errors are those associated with a reduced degree of representativeness of the sample with respect to the strata of the study population. A further, unknown error is introduced by a decline in the quality of the interviews, or by the use of quotas at the last link of the sampling. Yet another error is introduced by a lack of sincerity in respondents' answers when a probabilistic sampling design is used together with questionnaires containing questions the respondents perceive as delicate.
Keywords: survey; errors; sampling; population
1. Introduction: Why the Focus on Errors?
In the social sciences, and in the empirical sciences generally, as K.R. Popper states (Popper, 1982), we do not reach or identify the truth. What remains is to track routes toward the truth and to investigate the errors we make when we follow them. Popper himself states that our practical concern remains "... how can you hope to find and eliminate the error?" It is, as it were, an "epistemology of error" that provides the basic methodological orientation when conducting studies in the social sciences. For this reason, we set the focus of our research on making evident, with the help of the survey method, the errors made during the process of "seeking the truth". Identifying the errors made during a survey is itself part of the search for truth and, in this respect, it remains an ongoing process.
2. The Myth of a Unique Standard Error in Surveys
In the presentation of survey results on televised3 and written media, great emphasis is usually placed on the so-called "error made when conducting a survey", which, according to the authors of the survey, reveals "the degree of accuracy" of the measurements carried out. In the vast majority of cases an error of about ±3.2% (or even ±2.3%) is quoted for surveys that use samples of roughly 1,000 units (interviewed individuals). This margin or interval of error, which is generally the only indicator of error reported for a survey, is in fact a kind of standard error, conditioned by the reduced representativeness that arises when a sample is studied in place of the whole population. Such an error constitutes, as it were, "the tip of the iceberg" in the entirety of errors made during the survey process. It is calculated with a formula, or read off the so-called "curve of error as a function of sample size". In both cases, this error is defined only for probabilistic types of sampling, and it expresses statistically the error we make when we seek to extend the results obtained from the sampled units to all units in the study population (Bernstein, 1992).
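As an illustration, the familiar ±3% figure for a sample of about 1,000 respondents follows from the standard formula for the margin of error of an estimated proportion at the 95% confidence level. The sketch below, written in Python purely for illustration (the concrete sample sizes and the worst-case assumption p = 0.5 are ours, not taken from the article), shows the calculation.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Margin of error for an estimated proportion p from a simple
        random sample of size n; z = 1.96 corresponds to ~95% confidence."""
        return z * math.sqrt(p * (1 - p) / n)

    # Worst case (p = 0.5) for a sample of about 1,000 respondents:
    print(f"n = 1000: +/- {margin_of_error(1000) * 100:.1f}%")   # about +/- 3.1%
    # A larger sample of about 1,800 respondents narrows the margin:
    print(f"n = 1800: +/- {margin_of_error(1800) * 100:.1f}%")   # about +/- 2.3%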
Another error we make relates to the reduction of the sample's representativeness when, for a given sample size, we do not preserve in the sample the same proportions that units and groups (or strata) have in the study population. In technical language, this error is said to be introduced when we draw an unweighted sample; in this case the sample no longer represents the study population in miniature. The margins of this error are not given theoretically; they have to be determined experimentally through ad hoc measurement procedures, and they also depend on the link of the "sampling chain" at which the unweighted selection intervenes. For this type of error, the Department of Political Science has not yet begun procedures to measure its margins.
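One common way to reduce this error, once the imbalance between sample and population strata is known, is to re-weight the responses so that each stratum regains the share it has in the population. The sketch below is a minimal illustration of such post-stratification weighting; the strata labels and proportions are invented for the example and are not taken from the article.

    # Minimal post-stratification weighting sketch (illustrative figures only).
    population_share = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}  # assumed shares
    sample_share     = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}  # assumed shares

    # Each respondent in a stratum gets weight = population share / sample share,
    # so over- and under-represented strata are rebalanced.
    weights = {s: population_share[s] / sample_share[s] for s in population_share}

    def weighted_mean(responses):
        """responses: list of (stratum, value) pairs; returns the weighted mean."""
        total_w = sum(weights[s] for s, _ in responses)
        return sum(weights[s] * v for s, v in responses) / total_w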
Textbooks on research methods note that, to obtain more reliable data from surveys, one should, among other things, avoid asking questions that respondents perceive as delicate. Such a perception is not universal but local: it depends on the type of society and on the level of political culture and social emancipation. In developed democratic countries, questions about respondents' voting behaviour and political preferences are not perceived as delicate either by the electorate or by social science researchers. This is because in these societies freedom of expression is realized not only in theory (by law) but also in practice, and this freedom is guarded "jealously" by the relevant legal institutions. The voters of these societies therefore do not feel threatened by publicly expressing their political and electoral preferences, and when we ask questions of this nature there is no reason to expect insincere answers that would affect the reliability of the survey data.
It is not the same in countries where democracy is still under construction, as is the case of the Republic of Albania. In these countries laws protecting freedom of expression are, of course, drafted. But there are generally problems with the implementation of laws, including the law that protects freedom of expression. Institutions are relatively fragile and in practice do not guarantee individuals that they will not suffer consequences if their political beliefs and voting preferences are identified. Voters in these countries therefore view with concern the possibility of being identified while answering questions they perceive as delicate. As a defence mechanism they choose between two alternatives: refusing to be interviewed, or answering insincerely. The first reaction increases the share of the designed sample that cannot be realized (if probabilistic sampling is used); the second increases the measurement error by an unknown amount, thus reducing the reliability of the data. Both reactions are undesirable and it would be better if they could be avoided (Kocani, 2010).
The mind naturally turns first to avoiding questions that respondents perceive as delicate. When such questions cannot be avoided, it is suggested to do one's best to ensure the anonymity of respondents or, at least, the confidentiality of their responses. These two alternatives are recommended especially for studies in which asking delicate questions is part of the study design and cannot be avoided. Here, in our opinion, the problem is whether it is appropriate to apply the weaker alternative, guaranteeing only the confidentiality of respondents' answers, when their anonymity cannot be ensured. Essentially, this asks respondents to rely on the promise given by the interviewer, that is, to believe that the interviewer will keep confidential the answers to questions they consider delicate. This, in turn, presupposes that the level of social trust is relatively high, something that is not achieved in every society. It requires that most voters in the society in question consider it possible to trust others even when they do not know them; in our case, to trust interviewers who are strangers to them. More specifically, we need to know how high the degree of distrust of "the other" is, or, as it is otherwise called, the degree of diffidence.
Measurements made with surveys by researchers of the Department of Political Science, University of Tirana, since 1998 (more than 25 extended surveys in 1998, 2001, 2003, 2005, 2007, 2008, 2009, 2010, 2011, 2012 and 2013) show that, except for the 2009 survey, where the diffidence rate fell to a minimum value of 61%, in all other surveys it has ranged between approximately 80% and 90%. As an example, the degree of diffidence measured in late October 2012 was close to 95%; by April 2013 it had fallen to approximately 92%, and it was approximately the same in June 2013.
With such a degree of diffidence, respondents cannot be expected to believe interviewers' promises that everything they say "will remain between them". Therefore, to obtain more reliable survey data, one should not rely on the "formula" of promising respondents the confidentiality of their answers (Feraj & Kocani, 2013). If respondents are reached according to a probabilistic sampling scheme (random or systematic), which requires that the surveyed individuals be contacted in a way that makes them perceive themselves as easily identifiable (by name, address, place of work), then it makes no sense to speak of preserving their anonymity. The only thing that can be promised to respondents in this case is the confidentiality of their responses. But, as noted, such a promise is not credible in a society where the degree of diffidence is very high. It remains, then, to design a sampling that makes it possible to ensure the anonymity of respondents; in other words, a sampling that allows respondents to perceive themselves as not identifiable and thus frees them from the fear of getting into trouble if they give honest answers to questions they consider delicate. This can be accomplished if, at the last link of the sampling design (the one that contacts the people to be interviewed), the quota method is used (by age group and gender). According to our survey experience since 1998, the use of quotas at this last link makes respondents genuinely non-identifiable and therefore ensures the sincerity of their answers. In this way, reliable data can be obtained from the measurements, from which relatively reliable conclusions can then be drawn, which is the main goal of any research.
We reiterate that such a thing can be achieved only if the interviews are conducted not in the family or in settings where respondents can easily be identified, but on the street, away from their homes or workplaces, where they appear as passers-by. In this case, however, at the last link of the sampling we are forced to use a type of sampling that is not probabilistic, namely quota sampling. Such a sampling introduces an unspecified, non-standard error, which has to be measured with an ad hoc procedure (Kocani, 2011). On the other hand, not using quota sampling at the last link means failing to provide the anonymity of respondents, which by itself brings in an additional error not stated by theory; the margins of this measurement error also need to be defined by an ad hoc procedure (which must be devised by the interested researchers). For this purpose, two teams of surveyors are deployed, using the same questionnaire (which, in the case of the error associated with not taking into account respondents' identifiability when delicate questions are asked, includes such questions) and following two different paths at the last link of the sampling: one team uses the quota method, while the other uses the systematic "gate-to-gate" method. For the measurement of the error introduced by quotas alone, the only methodological difference is that the two teams use a questionnaire that avoids delicate questions. In this way the differences found in the values of the indicators are conditioned mainly by the use of quotas.
It is understandable that the margins of these differences include, in addition to the unknown error of the quota, a known statistical error related to the sample size. By subtracting the latter, we arrive at the margins of the error introduced by the use of quotas at the last link of the sampling. In the margins of the differences obtained when measuring the error introduced by neglecting respondents' anonymity, however, three components are included: the known statistical error related to the sample size, the error introduced by the quotas, and the error introduced by neglecting respondents' anonymity. Measuring this error margin therefore requires that the margins of the error introduced by the use of quotas at the last link of the sampling be determined in advance. This is one of the reasons why the two experiments were developed almost in parallel.
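A minimal sketch of this decomposition, under our own reading of the procedure described above (the indicator values, the sample size and the simple subtraction of error components are illustrative assumptions, not results from the article), might look as follows:

    import math

    def statistical_error(n, p=0.5, z=1.96):
        """Known sampling error for a proportion, at ~95% confidence."""
        return z * math.sqrt(p * (1 - p) / n)

    # Illustrative indicator values (percentages) from two parallel teams that
    # differ only in the last link of the sampling: quota vs. systematic.
    quota_team_value = 46.0        # assumed
    systematic_team_value = 42.5   # assumed
    n = 1000                       # assumed sample size per team

    observed_difference = abs(quota_team_value - systematic_team_value)
    known_error = statistical_error(n) * 100  # in percentage points

    # The residual difference is attributed to the quota method itself.
    quota_error_margin = max(observed_difference - known_error, 0.0)
    print(f"Estimated margin introduced by quotas: about {quota_error_margin:.1f} points")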
Since 2009, research groups at the Department of Political Science (Faculty of Social Sciences, University of Tirana) have carried out two waves of experiments to measure each of the two types of error mentioned above (the error introduced by quotas, and the error introduced when respondents' identifiability is not taken into account while delicate questions are asked). During the last academic year the third wave of these experiments was carried out, so as to obtain three margins of error for each type. Determining the margins of these two types of error constitutes the focus of a doctoral project led by researchers at the Department of Political Science of the FSS at UT.
Another error made in the survey process is one conditioned by poor interview quality. The theory of research methods notes that interviewers should be trained to behave as similarly to one another as possible, ideally as if they were a single person. Otherwise, differences between interviews would be amplified as respondents interpret the questions and would produce significant distortions when responses are compared. This undermines the legitimacy of comparing answers to the same questions, introducing an error not foreseen by theory.
In this case too, an ad hoc procedure has to be built for the experimental measurement of the margins of the error caused by the decline in interview quality. The results of one wave of experiments for measuring these error margins (conducted by a team of researchers at the Department of Political Science of the FSS at UT), together with the observations made in this direction, show that mainly the first 3-4 interviews are conducted by the interviewers at the best quality. Afterwards, mainly because of fatigue, the quality falls, which is manifested in differences in the values of various indicators (such as the Subjective Welfare indicator, the average of the response values, etc.); these differences constitute the additional error due to the decline in interview quality. We are currently collecting the databases from the surveys that have been and are being conducted on the electoral behaviour or the value profile of the electorate in the constituencies of Tirana; these will be processed, and the differences in question will be computed to determine the margins of the error mentioned above. More specifically, sub-bases will be extracted containing the answers of the first 4 interviews, the second 4 interviews and, where possible, the third 4 interviews. According to expectations, the differences should increase.
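A minimal sketch of how such a comparison might be computed, assuming each interview record carries the order in which its interviewer conducted it (the field layout and data are invented for illustration):

    # Each record: (interviewer_id, order_of_interview, indicator_value).
    # Data and field layout are invented for illustration.
    records = [
        ("A", 1, 6.8), ("A", 2, 6.5), ("A", 5, 5.9), ("A", 9, 5.4),
        ("B", 1, 7.1), ("B", 3, 6.9), ("B", 6, 6.0), ("B", 10, 5.7),
    ]

    def batch(order):
        """Assign an interview to the first, second or later batch of 4."""
        return "first 4" if order <= 4 else ("second 4" if order <= 8 else "third 4+")

    sums, counts = {}, {}
    for _, order, value in records:
        b = batch(order)
        sums[b] = sums.get(b, 0.0) + value
        counts[b] = counts.get(b, 0) + 1

    for b in ("first 4", "second 4", "third 4+"):
        if b in sums:
            print(f"{b}: mean indicator = {sums[b] / counts[b]:.2f}")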
Usually, the sampling process does not include all the areas that contain the analysis units of the study population. In this case one passes to so-called cluster sampling, which de facto reduces the degree to which the selected sample represents the study population. Consequently, an additional error is introduced whose size is not defined by theory. To identify the average margins of this error, as for any other error whose size is not given by theory, ad hoc measurement procedures have to be built. The procedure here is relatively simple. The databases produced by the waves of surveys on voting behaviour or on the value profile, carried out by teams of researchers at the Department of Political Science (FSS, UT), are collected. Comparisons are then made between the values of different indicators computed from databases built with data obtained from surveys in 2, 4, 8 and 11 constituencies of the city of Tirana. The procedures for measuring these margins have already started, and their results, together with those for the error arising from the decline in the quality of research interviews, form part of a doctoral project.
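A simple way to read such a comparison, under our own illustrative assumptions (the constituency counts match those mentioned above, but the indicator values are invented), is to treat the estimate from the largest set of constituencies as the reference and report how far the estimates from smaller subsets deviate from it:

    # Indicator estimates (e.g. % support) computed from databases covering
    # 2, 4, 8 and 11 constituencies of Tirana; values are invented for illustration.
    estimates = {2: 44.0, 4: 45.5, 8: 46.2, 11: 46.8}

    reference = estimates[11]  # the fullest coverage serves as the reference
    for k in (2, 4, 8):
        deviation = estimates[k] - reference
        print(f"{k} constituencies: {deviation:+.1f} points from the 11-constituency estimate")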
References
Bernstein, R.A. & Dyer, J.A. (1992). An Introduction to Political Science Methods, 3rd Edition. New Jersey: Prentice Hall.
Feraj, H. & Kocani, A. (2013). Anonymous or Confidentiality in Opinion Survey? (The Selective Sampling in Surveys approach where are Included Delicate Questions). Mediterranean Journal of Social Sciences, Vol. 4, No. 2, May.
Kocani, A. (2010). A methodological approach: the measurement error in the survey is not taken into consideration when changing socio-political environment (case Zogby polls in Bangladesh). ACTS scientific journal of Alb-Science Institute, Vol. III, No. 4.
Kocani, A. (2011). Defining the margin of error that introduces quota in the last link of a survey sampling versus systematic sampling. ACTS scientific journal of Alb-Science Institute, Vol. IV, No. 4.
Miller, D. (2000). Selected Works Popper. Tirana: Venus & Soros.
Popper, K.R. (1982). La Logique de la découverte scientifique/The Logic of Scientific Discovery. Paris: Payot.
1 PhD Student, Faculty of Social Sciences, University of Tirana, Albania, Address: Rruga Eduard Mano, Selitë 2, Tel.: +355 4 246 3308, Corresponding author: sanielaxhaferi@gmail.com.
2 Professor, PhD, Faculty of Social Sciences, University of Tirana, Albania, Address: Rruga Eduard Mano, Selitë 2, Tel.: +355 4 246 3308, E-mail: alkocani@gmail.com.
3 It suffices to recall here the presentations on Albanian TV of the results of surveys conducted by Zogby, or those carried out by an Italian company (on News 24 TV), and the results of that channel's barometer.