Thursday, November 23, 2017

Uncertain Composition of Critical Thinking Abilities

There is no reason to believe that critical thinking is a quality, or set of qualities, narrow enough to be precisely defined or circumscribed. Therefore, an appropriate “definition” of critical thinking is the one given by Facione (1998), according to whom “The behaviors or habits of mind associated with critical thinking include asking questions, defining a problem, examining evidence, analyzing assumptions and biases, avoiding over-simplification, reflecting on other interpretations, and tolerating ambiguity” (as cited in Visser, Visser, & Schlosser, 2003, p. 401). Consequently, there is no reason to believe that any particular set of skills, characteristics, or practices associated with critical thinking can be said to be more or less beneficial or important than any other. This can be seen even in the influential models of critical thinking described by Kelly (2014).

Thus, for example, Kitchener and King's (1981) model of critical thinking focuses on measuring individuals' ability to solve “well-structured” and “ill-structured” problems. Well-structured problems are those in which the solution follows almost inevitably from the facts of the case (which are fixed), while ill-structured problems are those in which multiple plausible solutions are possible and the facts of the case are not fixed (as cited in Kelly, 2014). Hence, it seems safe to conclude that the “well-structured” problems of this model are those that can largely be solved through simple deductive reasoning, while coming up with possible solutions to “ill-structured” problems commonly requires several lines of complex inductive reasoning. Thus, it appears that Kitchener and King (1981) view critical thinking as a progressive ability to engage in ever more complex use of standard logic (from simple deductive to complex inductive logic). Such a view of critical thinking is extremely limited.

After all, the often complex “reasoning” performed by computers is based exclusively on formal logic, yet few, if any, people would say that computers are capable of critical thought. On the other hand, the first three levels of Kitchener and King's (1981) model describe the reasoning of people who are supposedly capable only of two-valued reasoning, in which all propositions are seen as either true or false. Consequently, individuals falling into the first three levels of Kitchener and King's (1981) model are unable to distinguish between “well-structured” and “ill-structured” problems, since they view both as equally well defined and either true or false (as cited in Kelly, 2014).

Hence, Kitchener and King's (1981) model looks rather strange. After all, many computer programs use non-classical logics, which classify propositions into multiple categories (not just true and false). Moreover, computers are incapable of complex inductive reasoning (especially when the set of premises is not fixed) and would therefore fail immediately upon encountering an “ill-structured” problem, thereby demonstrating an ability to “distinguish” between “well-structured” and “ill-structured” problems; an ability that some people, according to Kitchener and King's (1981) model, supposedly lack.
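
As a purely illustrative aside (not drawn from any of the sources cited here), the following short Python sketch contrasts ordinary two-valued logic with a simple three-valued (Kleene-style) logic, in which a proposition may also be “unknown”; all names and values in it are hypothetical:

```python
# Illustrative sketch only: a minimal three-valued (Kleene-style) logic,
# contrasted with ordinary two-valued Boolean logic.

UNKNOWN = None  # the third truth value, alongside True and False


def kleene_not(p):
    """Negation: an unknown proposition stays unknown."""
    return UNKNOWN if p is UNKNOWN else (not p)


def kleene_and(p, q):
    """Conjunction: False dominates; otherwise unknown propagates."""
    if p is False or q is False:
        return False
    if p is UNKNOWN or q is UNKNOWN:
        return UNKNOWN
    return True


if __name__ == "__main__":
    # Two-valued logic forces every proposition to be either True or False...
    print(True and False)             # -> False
    # ...whereas a three-valued logic can leave a proposition undecided.
    print(kleene_and(True, UNKNOWN))  # -> None (unknown)
    print(kleene_not(UNKNOWN))        # -> None (unknown)
```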

Thus, the fact that computers “possess” critical thinking skills, which some people, according to Kitchener and King (1981), apparently lack, illustrates the great vagueness and breadth of the critical thinking concept and the consequent impossibility of deciding which possible aspects of critical thinking can be said to be more or less beneficial or important than others.

References

Facione, P. (1998). Critical thinking: What it is and why it counts. Santa Clara, CA: California Academic Press.

Kitchener, K. S., & King, P. M. (1981). Reflective judgment: Concepts of justification and their relationship to age and education. Journal of Applied Developmental Psychology, 2, 89-116.

Kelly, S. (2014). Critical thinking: The means to inquire. In A. DiVincenzo (Ed.), Find your purpose: The path to a successful doctoral experience. Phoenix, AZ: Grand Canyon University.


Visser, L., Visser, Y. L., & Schlosser, C. (2003). Critical thinking, distance education, and traditional education. Quarterly Review of Distance Education, 4(4), 401-407.

Tuesday, April 19, 2016

Exploring the Theoretical Basis of the Case Study Method

According to Stake (2005), the case study is not a research methodology but the study of a particular case, using any desirable/relevant research methods (as cited in Thomas, 2011). The case being studied can be an institution, a project, a period, a person, an event, a decision, or any other system. However, this case must be an instance of some class of cases, identified by the researcher, which constitutes the study’s analytical framework (Thomas, 2011). Hence, it can be argued that Chagnon’s (1983) participant-observation study of an isolated South American tribe is a case study: the particular case here is the culture and social structure of a particular tribal society, while tribal societies, the peculiarities of their structures and cultures, and particular approaches to studying them clearly form Chagnon’s (1983) analytical framework.

However, this is not the whole story. In fact, according to Yin (2014), case studies are appropriate for those instances where the research objective is to provide a comprehensive and in-depth description of a social phenomenon (as cited in Maul, 2015). But what does this “social phenomenon” refer to? According to Yin (2014), there are no clear boundaries between this “social phenomenon” (which is to be explored by the case study) and its context (as cited in Maul, 2015), which isn’t very helpful. So, a different approach is in order.

Thomas’ (2011) literature review on what defines a case in a case study notes that a case can be a particular system, program, institution, project, or policy in a “real life” context. It also notes that cases are defined by boundaries around places and time periods (e.g., Germany after World War I). In addition, according to George and Bennett (2005), the studied case must be an instance of some class of cases identified by the researcher (as cited in Thomas, 2011). Thomas (2011) refers to this class of cases as a social phenomenon, which must comprise the study’s analytical framework.

Hence, it appears that Yin’s (2014) “social phenomenon” is not the case to be researched by the case study, as it may at first appear. Instead, it is simultaneously a class of cases, to which the case to be studied must belong, and the analytical framework of the study. However, it can still be argued that there are clear boundaries between most classes of cases/analytical frameworks and their contexts, if only because otherwise they would not be identifiable.

References

Chagnon, N. A. (1983). Yanomamo: The fierce people (3rd ed.). New York, NY: Holt, Rinehart and Winston.

George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences. Cambridge, MA: MIT Press.

Maul, J. (2015). Qualitative core designs: Sampling and evaluation of qualitative research. In Grand Canyon University (Ed.), GCU doctoral research: Foundations and theories. Retrieved from http://www.gcumedia.com/digital-resources/grand-canyon-university/2015/gcu-doctoral-research_foundations-and-theories_ebook_1e.php

Stake, R. E. (2005). Qualitative case studies. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (3rd ed.) (pp. 443–466). Thousand Oaks, CA: SAGE.

Thomas, G. (2011). A typology for the case study in social science following a review of definition, discourse, and structure. Qualitative Inquiry, 17(6), 511-521. doi:10.1177/1077800411409884

Yin, R. K. (2014). Case study research: Design and methods (5th ed.). Thousand Oaks, CA: SAGE Publications.


Methodological Challenges of Narrative Inquiry

Narrative inquiry is a qualitative research design that involves studying human experiences by telling stories about them. In particular, the researcher writes stories out of the information that he/she has gathered (Maul, 2015). In addition, the researcher is encouraged to tell his/her own autobiographical story and tie it to the stories of the research subjects. In narrative inquiry, the most common method of data collection is the interview (Maul, 2015). Chagnon’s (1983) participant-observation study of an isolated South American tribe appears to be (at least in part) a good example of narrative inquiry. After all, this study masterfully intertwines a large number of stories about the studied tribe in general, its particular members, and the author’s adventures and misadventures during the course of the fieldwork.

However, there is every reason to believe that narrative inquiry is subject to the introduction of many unintentional and intentional biases, which may be severe. After all, storytelling seems always to involve the introduction of fictional details (if not plot twists) into an otherwise true narrative. In addition, stories, and even faithfully recorded narrative transcripts, run a high risk of describing unique individuals in unique circumstances, making the results of such a study ungeneralizable.

Also, narrative transcripts, even when they are faithful reproductions of the words of informants, can contain fictional elements, sometimes of extreme magnitude. For example, in Nanda’s (1999) study of the traditional male transvestites of India, known as hijras, who have a feminine gender identity, one of her informants provided an extensive biographical narrative in which he described in great detail how his feelings of being a girl trapped in a boy’s body started and gradually crystallized from early childhood onward. However, another hijra, who had known this informant for many years, privately told Nanda (1999) that this whole narrative was a fabrication. Instead, his real biography consisted of growing up as an average boy, who became an average man, who married and had children, and only in mid-adulthood became transgendered and left his former life behind. Thus, while this disputed biographical narrative may be useful for understanding the collective values and gender identity of India’s hijras, its lack of biographical credibility makes it fairly useless for exploring how the personal values and gender identity of individual hijras develop from childhood onward, which was, unfortunately, one of Nanda’s (1999) main research questions.

References

Chagnon, N. A. (1983). Yanomamo: The fierce people (3rd ed.). New York, NY: Holt, Rinehart and Winston.

Maul, J. (2015). Qualitative core designs: Sampling and evaluation of qualitative research. In Grand Canyon University (Ed.), GCU doctoral research: Foundations and theories. Retrieved from http://www.gcumedia.com/digital-resources/grand-canyon-university/2015/gcu-doctoral-research_foundations-and-theories_ebook_1e.php

Nanda, S. (1999). Neither man nor woman: The hijras of India (2nd ed.). Belmont, CA: Wadsworth Publishing Company.


Saturday, March 12, 2016

Critical Thinking, Formal Logic, and Mathematics

According to Ennis (1990), “Critical thinking is reasonable, reflective thinking that is focused on deciding what to do or believe” (as cited in University of Western Sydney, n.d.), while according to Moon (2008), “Critical thinking is a capacity to work with complex ideas whereby a person can make effective provision of evidence to justify a reasonable judgment. The evidence, and therefore the judgment, will pay appropriate attention to context” (as cited in University of Western Sydney, n.d.).

However, both of these influential definitions seem too vague and, at the same time, too narrow to be useful. In fact, they don’t even make a distinction between deductive and inductive reasoning. Deductive reasoning involves deciding (using any number of fixed rules of deductive logic) what conclusions must follow from a given set of premises/propositions. In a correctly performed deductive thread of thought, the conclusion is guaranteed to be true if the premises from which it was deduced are true. On the other hand, inductive reasoning involves deciding (using few, if any, definite rules) what conclusions may follow from a given set of premises/propositions. Hence, no matter how well an inductive thread of thought is performed, the conclusion is never guaranteed to be true, even if the premises from which it was induced are known to be true. Most scientific theories are good examples of high-level inductive reasoning, while most complete solutions to complex mathematical problems are good examples of high-level deductive reasoning. And this seems to pose a problem for the definitions of critical thinking given by Ennis (1990) and Moon (2008).
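
To make this contrast concrete, here is a minimal Python sketch (not taken from the cited sources; the rule, the premises, and the swan data are invented purely for illustration) in which a deductive step guarantees its conclusion, while an inductive step merely suggests one:

```python
# Illustrative sketch only: deduction versus induction, with hypothetical data.

def deduce_modus_ponens(premises, implication):
    """Deductive step: from 'p' and 'p -> q', conclude 'q'.
    If the premises are true, the conclusion is guaranteed to be true."""
    p, q = implication
    return q if p in premises else None


def induce_generalization(observations):
    """Inductive step: from finitely many observations, guess a general rule.
    The conclusion may be false even if every observation is accurate."""
    if observations and all(color == "white" for color in observations):
        return "All swans are white"  # famously overturned by black swans
    return "No simple generalization"


if __name__ == "__main__":
    # Deduction: "Socrates is a man" plus "man -> mortal" yields "mortal".
    print(deduce_modus_ponens({"Socrates is a man"},
                              ("Socrates is a man", "Socrates is mortal")))
    # Induction: every swan seen so far is white, so we (riskily) generalize.
    print(induce_generalization(["white", "white", "white"]))
```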

After all, solving complex mathematical problems definitely requires “reasonable, reflective thinking that is focused on deciding what to do or believe” (Ennis, 1990). Similarly, solving complex mathematical problems definitely requires “a capacity to work with complex ideas,” making “effective provision of evidence to justify a reasonable [or even undeniable] judgment,” and paying attention to context (Moon, 2008). Anyone who doubts that solving complex mathematical problems, even by simply following the rules/steps developed for their solution, requires all these skills should take a look at the following flowchart: http://www.nature.com/protocolexchange/system/uploads/2626/original/flowchart.jpg?1372325178 which graphically and textually represents a mathematical algorithm (i.e., a sequence of steps required for reaching the correct solution) “for the control of complex networks and other nonlinear, high-dimensional dynamical systems” (Cornelius & Motter, 2013). In this respect, it is important to note that computers, which run exclusively on algorithms, have long been unrivaled (with regard to speed) in solving complex mathematical problems by simply following the rules/steps developed for their solution (i.e., the algorithms).

Thus, according to the definitions of Ennis (1990) and Moon (2008), it seems possible to be considered a critical thinker while being completely incapable of inductive reasoning (which humans actually use at every turn) and of independent thought, like most computers.

References

Cornelius, S. P., & Motter, A. E. (2013). NECO – A scalable algorithm for NEtwork COntrol. Protocol Exchange. doi:10.1038/protex.2013.063

University of Western Sydney. (n.d.). Develop your skills in critical thinking and analysis. Retrieved from https://www.uws.edu.au/hall/hall/critical_thinking

Thursday, March 10, 2016

Is it possible to conduct a full, unbiased and unambiguous study of a topic when the resolution of scientific and/or technological uncertainty is not a research objective?

There are many examples in the humanities and the arts where something similar to the scientific method (i.e., collection and grading of evidence, and attempts to theorize and draw conclusions from it) is habitually used to answer a variety of questions. While such research does involve attempts to resolve uncertainty about the topics under investigation, these topics are not among the subjects addressed by the natural/social sciences or engineering, and the attempts to resolve uncertainty about them do not utilize any knowledge from the natural/social sciences or engineering.

Also, research in a number of fields outside of the natural/social sciences and engineering has long been producing results that are superior in their objectivity and certainty to anything produced by the natural/social sciences or engineering. In particular, the derivation of a logically valid proof of any theorem in mathematics (or, more broadly, in any system of formal logic) guarantees that this theorem is true in all the cases it claims to address and that it will always remain true. By contrast, scientific theories and empirical research results are frequently discarded, or at least modified, in response to new empirical evidence or the development of more sophisticated theories with greater explanatory/predictive power.

However, it is important to keep in mind that research projects in many subfields of the humanities and the arts never try to conduct a “full, unbiased and unambiguous study” in the first place, if only because the subject matter under investigation is open to a wide variety of interpretations; preventing such variety would only move us further away from objectivity (which is, in any case, unreachable), since many perspectives would be deliberately neglected.

Moreover, there is an influential body of thought which argues that artistic research is a unique method of research. And while attempts to outline it are rather complex, it is clear that, according to its proponents, artistic practice (i.e., the creation of art itself) is an integral part of artistic research (Borgdorff, 2012, pp. 140–173). However, given that other (“non-art”) academics generally find the concept of artistic research distasteful and confusing, attempts have been made to describe artistic research as something a lot like scientific research (Borgdorff, 2012, pp. 56–103). In fact, according to Lesage (2009), even artistic practice “can be described in a way more or less analogous to scientific research” (p. 5). Thus, an artistic project “begins with the formulation, in a certain context, of an artistic problem” (p. 5). This leads to “an investigation, both artistic and topical, into a certain problematic, which may or may not lead to an artwork, intervention, performance or statement” (Lesage, 2009, p. 5). However, if the investigation does lead to “an artwork, intervention, performance or statement,” the artist uses it as a new reference point for looking at the initial artistic problem and its context (Lesage, 2009, p. 5).

References

Borgdorff, H. (2012). The conflict of the faculties: Perspectives on artistic research and academia. Leiden, NL: Leiden University Press. Retrieved from https://openaccess.leidenuniv.nl/bitstream/handle/1887/21413/file444584.pdf?sequence=1


Lesage, D. (2009). Who’s afraid of artistic research? On measuring artistic research output. ART&RESEARCH: A Journal of Ideas, Contexts and Methods, 2(2). Retrieved from http://www.artandresearch.org.uk/v2n2/pdfs/lesage.pdf

Validity, Reliability and Qualitative Research

In research, a reliable measure is one that gives the same result over and over again (assuming there is no change in what is being measured) (Trochim, 2006a), while a valid measure is one that gives the correct value (assuming there is such a thing as a “correct” value for the given measurement) (Trochim, 2006b). However, according to Cronbach (1975), all phenomena, even those that can be quantitatively measured, will sooner or later change. This is especially true in the social and behavioral sciences, where (1) the studied phenomena change very rapidly, and (2) it is often impossible to tell whether two different measurements of the same phenomenon were made under identical conditions, because isolating the variables of interest from all external influences may be impossible (Cronbach, 1975, pp. 122-123; as cited in Lincoln & Guba, 1985, p. 115). Those social and behavioral phenomena which are primarily studied through qualitative methods (often because it is hard to study them through quantitative methods) are especially prone to such unpredictability. Hence, whatever is being “measured” through qualitative research is usually constantly changing. And, as a result, there is no way to know what its “correct” value is at any one time, because this value is also constantly changing. Therefore, when it comes to qualitative research design, the concepts of validity and reliability seem to be inapplicable.
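
As a purely illustrative aside, the reliability/validity distinction defined above (following Trochim, 2006a, 2006b) for quantitative measures can be sketched in a few lines of Python; the numbers and function names below are invented for the example:

```python
# Illustrative sketch only: a toy simulation of the reliability/validity
# distinction for a quantitative measure, using hypothetical values.

import random

TRUE_VALUE = 10.0  # the "correct" value being measured


def reliable_but_invalid():
    """Returns nearly the same result every time, but is systematically wrong."""
    return 12.0 + random.gauss(0, 0.01)


def unreliable_but_unbiased():
    """Centered on the correct value, but varies widely between measurements."""
    return TRUE_VALUE + random.gauss(0, 3.0)


if __name__ == "__main__":
    random.seed(0)
    # Reliable (consistent) yet invalid (off-target) measurements:
    print([round(reliable_but_invalid(), 2) for _ in range(3)])
    # Valid on average (centered on 10.0) yet unreliable (widely scattered):
    print([round(unreliable_but_unbiased(), 2) for _ in range(3)])
```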

References

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116-127.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.

Trochim, W. M. K. (2006a). Theory of reliability. Retrieved from http://www.socialresearchmethods.net/kb/reliablt.php


Trochim, W. M. K. (2006b). Reliability & validity. Retrieved from http://www.socialresearchmethods.net/kb/relandval.php