STATISTICAL THINKING TOWARDS CONSERVATISM

Researchers are usually eager to obtain positive results, and in the process they may favor conclusions that over-interpret the data. For this reason, the statistician's role is often seen as adding objectivity to the interpretation of data and encouraging carefulness and thoughtfulness. However, researchers may counter that conservatism and science are incompatible: being so careful and thoughtful as to protect oneself against the worst-case scenario could hinder the ability to make bold, innovative discoveries.

 

OBJECTIVE RESEARCH

Objective research seeks to establish law-like generalizations that apply to similar scenarios or occurrences, and possibly to diverse or distinctive situations as well. This standpoint of OBJECTIVITY is often referred to as "POSITIVISM", which relies heavily on statistical analysis and the simplification of data. Positivist researchers adopt quantitative methodologies to measure and record facts that are gathered for data analysis. When it comes to interpretation, interpretivists are concerned with elucidating and explaining qualitative data in the form of words or images. This is the main reason why most researchers operating from objectivist and positivist assumptions adopt quantitative data analysis, whilst interpretivists only occasionally implement quantitative data analysis methods.

 

SUBJECTIVE RESEARCH

Subjective research can be seen as phenomenon-oriented research: it studies knowledge and understanding from the standpoint of the researcher in person, and stresses the significance of personal viewpoints and evaluations that lead to interpretations. Subjective research evaluates data obtained from observed occurrences, typically collected through unstructured or semi-structured questionnaires or interviews. An unstructured questionnaire builds its questions from the dialogue between the questioner and the respondent. In a semi-structured interview, the questioner prepares an outline of the questionnaire and adds further questions as the need arises, whereas a structured questionnaire contains the full list of questions in advance. The researcher then discusses and interprets the recorded data based on personal observations and the facts acquired during data collection and analysis. Interpretive ("INTERPRETIVIST") researchers adopt this methodology because they believe in the need to comprehend, justify, and defend precise findings while considering the emotions and opinions of the sampled individuals or groups on the subject matter.

 

 

OBJECTIVE OR SUBJECTIVE: WHICH OF THE TWO APPROACHES IS PREFERABLE?

Have you attempted, or been exposed to, a scenario that required analyzing data, either now or in the past? Did the analysis you used prove beneficial or harmful with respect to over-interpreting the data, or with respect to a conservative scientific approach?

For instance, many scientific journals resolve to reject a research paper unless its chief findings are statistically significant (p-value < 5%). This naturally poses the question of whether one could publish results that are significant only at the 10% level.

 

The statistician has to ensure that all reported results are true and testable by others, which requires accurate information. That is why statistical inference is used to unravel the reason behind an observation. In hypothesis testing, however, the significance level remains critical, and nothing stops a statistician from using a threshold different from 0.05, or even going as high as 10%. One thing to keep in mind is that the lower the significance level used in a study, the stronger the evidence required to reject the null hypothesis, and so the more often the null hypothesis will fail to be rejected.

 

A p-value quantifies the probability of observing data at least as extreme as what was actually observed, assuming the null hypothesis is true; it does not give the probability that the null hypothesis itself is true. Measured against a chosen significance level, it indicates the strength of the evidence against the null hypothesis, and it is always easier to reject a null hypothesis at a threshold of 0.10 than at 0.05.
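As a sketch of how the choice of threshold changes the decision, the following snippet uses a hypothetical standard-normal test statistic (z = 1.8 is an illustrative value, not from the text) to compute a two-sided p-value and compare it against the 5% and 10% levels:

```python
import math

def two_sided_p_value(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic z.

    P(|Z| >= |z|) = erfc(|z| / sqrt(2)) for Z ~ N(0, 1).
    """
    return math.erfc(abs(z) / math.sqrt(2))

def decide(p: float, alpha: float) -> str:
    """Classical decision rule: reject H0 when p < alpha."""
    return "reject H0" if p < alpha else "fail to reject H0"

z = 1.8                      # hypothetical observed test statistic
p = two_sided_p_value(z)     # roughly 0.072
print(f"p = {p:.3f}")
print("alpha = 0.05:", decide(p, 0.05))  # fail to reject H0
print("alpha = 0.10:", decide(p, 0.10))  # reject H0
```

The same data are "significant" at the 10% level but not at the 5% level, which is exactly the situation the journal example above describes.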

 

A lower significance level means fewer Type I errors (false rejections of a true null hypothesis), while a higher level allows more such errors, which bears directly on the accuracy of the research; a study aims to provide results that are, if not perfectly, then close to 100% accurate.
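The link between the significance level and the rate of false rejections can be illustrated with a small simulation. This is a sketch under illustrative assumptions: a two-sided z-test with known variance, applied repeatedly to data where the null hypothesis (mean 0) is actually true, so every rejection is a Type I error:

```python
import random
import statistics

def false_positive_rate(alpha: float, n_trials: int = 20000,
                        n: int = 30, seed: int = 0) -> float:
    """Fraction of simulated z-tests that wrongly reject a true H0.

    Samples are drawn from N(0, 1), so H0 (mean = 0) holds by
    construction; any rejection is a Type I error.
    """
    rng = random.Random(seed)
    # Two-sided critical values for common significance levels.
    crit = {0.10: 1.645, 0.05: 1.960, 0.01: 2.576}[alpha]
    rejections = 0
    for _ in range(n_trials):
        sample = [rng.gauss(0, 1) for _ in range(n)]
        # z = sample mean / (sigma / sqrt(n)) with known sigma = 1
        z = statistics.mean(sample) * (n ** 0.5)
        if abs(z) > crit:
            rejections += 1
    return rejections / n_trials

print(false_positive_rate(0.05))  # close to 0.05
print(false_positive_rate(0.10))  # close to 0.10
```

The simulated false-positive rate tracks the chosen alpha: tightening the level from 0.10 to 0.05 roughly halves the number of wrong rejections, at the cost of making genuine effects harder to declare significant.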

 

Personally, I do not think there is any harm in using a significance level of either 5% or 10%; it is up to the researcher to decide which is the best option for the research at hand. In my opinion, what matters more is assessing the strength, or validity, of the statistical test at the chosen significance level.

 

The 0.05 significance level has become a general default but, as mentioned, it is up to the researcher to decide which level best fits the problem statement. For some problems a 10% significance level may be the best fit, while for others 5%, or even as low as 1%, would be the better option. The researcher should, however, communicate the interpretation clearly, explaining the significance and confidence levels used, especially to the targeted audience, so that everyone has a clear understanding and remains on the same page. This would also help future investigations by other researchers who may decide to carry out the same study using a different significance level.

 

In statistical thinking, there is a tendency towards conservatism.

 
