Friday, 23 February 2018

On “Testing vs Checking” and other Oppositions - Dialectics and Deconstruction


Software testing appears to be riddled with contradictory narratives argued from opposing sides. We argue about "testing" vs "checking", "traditional" vs "modern" testing approaches and "scripted" vs "exploratory" testing, and we draw arbitrary distinctions to argue about "automated" vs "manual" testing. In these cases the opposition is based not just on semantics and meaning but also on value (i.e. that one is wholly "better" than the other). Despite the efforts of testing practitioners to debate these topics online and at conferences, it is arguable that the arguments on either side are not significantly more sophisticated than before, and no more likely to be resolved by consensus. In the second of my blogs touching on postmodernism and testing, I look at the nature of oppositional debate and touch on why that could be.

I also theorize, through concepts developed by the philosophers Hegel and Derrida, that when we see the oppositions above in a text we can question them through the processes of dialectic and deconstruction, and so develop more sophisticated, contextual and progressive propositions.


On Dialectics, the study of Discourse


The reasoned discourse between two opposing arguments being debated is known as a dialectic, a philosophical term dating from the classical Greek period of Socrates and Plato. Among the earliest noted examples are the Socratic dialogues, in which statements are clarified through questioning and their logical consequences studied until a contradiction arises that refutes them. Socrates' aim was to find truth, so he had no qualms about changing his arguments to further that aim. Dialectic is closely related to rhetoric (the public defending of beliefs and attacking of opponents) but was seen as a counterpart to it (notably by Aristotle). In the Middle Ages it was included within the teaching of Logic.

In modern times the dialectic was revived by the great German philosophers J. G. Fichte and, later, G. W. F. Hegel. Building on ideas from Immanuel Kant, Fichte described the dialectic as the triad Thesis-Antithesis-Synthesis (later misattributed to Hegel), with the definitions below -

• Thesis - Some Statement or Proposition

• Antithesis - A statement that negates the Thesis

• Synthesis - A process in which the Thesis and Antithesis are reconciled to form a new proposition.

In Fichte’s definition, the Synthesis could be used as a Thesis in a future dialectic. The continual synthesis of thesis and antithesis in a chain would constitute more sophisticated understanding and progress.

[Image credit: https://questians.wordpress.com/2010/12/14/a-dialectical-approach-to-the-bible/]

Example 1: Hegel's Being and Nothing


Hegel's book "Logic" provides the following example regarding existence.

• Thesis - Existence must be posited as pure Being

• Antithesis - Pure Being, upon examination, is found to be indistinguishable from Nothing

• Synthesis - When it is realized that what is coming into being is, at the same time, also returning to nothing (in life, for example, one's living is also a dying), both Being and Nothing are united as Becoming.

Example 2: Marx’s Theory of History


Marx and Engels took the Hegelian dialectic from a context purely of ideas to a materialist context (that of the real world of production and economics) to describe how they believed change and progress would develop in societies. In Marx's theory of history, the thesis and antithesis were different classes within society, and the synthesis would be the societal transformation resulting from the inevitable conflict between those classes.

[Image: the Marxist dialectic. Credit: https://www.pinterest.com.au/pin/346073552589181632/]

Each of the above involves some overcoming of the negation (antithesis), keeping the useful parts of an idea in order to develop a more sophisticated proposition (which Hegel called Aufhebung, or sublation). Hegel stated that this "negation of the negation" incorporates the opposing idea into the new one. This was the foundation of the Hegelian dialectic, and provided a counter-approach to the skeptical Platonic dialectical method of reductio ad absurdum, in which, following a contradiction, the thesis is simply rejected, leaving us with nothing. Hegel believed that contradictions were a necessary outcome of reasoned argument, so only a synthetic approach would bring us to a complete truth.

Semiotics - A Dualist Notion of Words and Signs


Later in the 1800s, Ferdinand de Saussure, the so-called Father of Linguistics, proposed a structured interpretation of the use of words, signs and utterances. He called the word or phrase used the "Signifier", and the concept or object referred to by the Signifier the "Signified". His great contribution was to note that the Signifier need have no inherent connection to the Signified - the relationship is completely arbitrary, and the brain combines the signifier word with the signified concept based on some customary or agreed relationship. In doing so, the brain synthesizes and provides meaning to the sign - a sort of application of the Hegelian Dialectic. The study of the relationships between, and use of, signs and words as described above is known as Semiotics. For the purposes of this essay, "word" and "sign" will be treated as synonymous.

How can a word and its meaning have no actual connection? Saussure made the case that words only have meaning relative to other words. The word "tree" therefore only has meaning when distinguished from words such as "bush" or "shrub" - it only has use as part of an overarching structure of synonyms and antonyms. The relationship between Signifier and Signified also changes over time, such that some words become dated as circumstances change. In this respect, Saussure is considered a pioneer of "Structuralism".

Derrida - Spot the Différance

[Image: Jacques Derrida]

In the 1960s the great French postmodern philosopher Jacques Derrida took de Saussure's ideas to the next level. He stated that signs (including words) can never in themselves denote what they mean; they can only do so by deferring to a potentially endless chain of other words (signifiers) from which they differ. Our understanding of a word's meaning also changes with each reading and over time, as the definitions of existing words shift and new words are introduced.

This results in what he called "espacement", or spacing; Derrida stated that this differentiation gives rise to binary oppositions and hierarchies.

Derrida coined the term "différance" (sic), a deliberate misspelling denoting not just the difference between words but also hierarchy and deferral (to defer in French is "différer"). It includes the fact that a word can only have meaning within the context of the text ("there is no outside text").
According to Derrida, this chain of signifiers will never reach a point of complete meaning that transcends context and is understood at all times and by all parties (the so-called "transcendental signified", used in philosophy as part of ontological arguments for the existence of God).

However, he pointed out that the opposition of words and signs with others expresses not just meaning but values.

"On the one hand, we must traverse a phase of overturning. To do justice to this necessity is to recognize that in a classical philosophical opposition we are not dealing with the peaceful coexistence of a vis-a-vis but rather with a violent hierarchy. One of the two terms governs the other (axiologically, logically etc.) or has the upper hand.

To deconstruct the opposition, first of all, is to overturn the hierarchy at a given moment. To overlook this phase of overturning is to forget the conflictual and subordinating structure of opposition."

Binary Opposition


Many word pairs in binary opposition carry not just logical but also value oppositions, depending on their use. Examples include "Christian" and "Pagan", "Left" and "Right" (in the philosophical/political sense), "Speech" and "Writing", "reality" and "fiction", and "reason" and "passion". Different authors and readers at different times will value one over the other. Depending on the reader's understanding, their definition at the current time and their use in the text, the differences in meaning and value between the two are variable and changing. Derrida called these discrepancies "traces", stating that a word not only denotes its own meaning but also, through its partner in binary opposition, what it does not mean (which he referred to as the "absence of the present").

This "deconstruction" was the ongoing practice denoted by Derrida for all texts to be studied for conceptual and temporal oppositions and these broken down to develop new meaning. It formed his life's work and is heavily influential in postmodern philosophy, law, linguistics, LGBT studies, psychoanalysis and literary theory. Note that the structures of word opposition remain and are required for any sort of communication, however the act of deconstruction allows for them to be broken apart and the gaps between the implied sense of each word in opposition to be analysed and understood within the context of the text.

It must be stated that, according to Derrida and unlike in Hegel's dialectic, these oppositions (thesis and antithesis) are irreducibly complex and unstable. They can be understood and their différance noted and determined; however, they cannot be synthesised into a whole as Hegel proposed.

"The oppositions simply cannot be suspended once and for all. The hierarchy of dual oppositions always re-establishes itself. Deconstruction only points to the necessity of an unending analysis that can make explicit the decisions and arbitrary violence intrinsic to all texts."

The above ideas are controversial and have been disputed (notably by John Searle and Alan Sokal); however, they can be used as a model for approaching the contradictions and arguments stated in my introduction.

Applications to Testing Discourse



By applying the concepts above we can make assertions about the oppositions so heavily debated in testing. The pairs "testing" vs "checking", "scripted" vs "exploratory" and "automated" vs "manual" testing, in the context of how they are usually debated, are treated as binary opposites. As stated in the introduction, arguments around them also imply not just logical or semantic opposition but value opposition (i.e. that one is "better" or more worthy than the other).

An Example - Testing vs Checking


Let us take a look at the first example, “testing” vs “checking”.

Personally, I would define "testing" as a systematic exploration and investigation of the application - less algorithmic, more a constant thinking process. I would equally define "checking" as a more systematic procedure of experimentation: comparing a scenario's outcome with some expected result or test oracle, then moving on. However, my definitions are meaningless without appeal to the implied meanings of the other words in the propositions above - and the words they in turn relate to. I also admit that the simple words "testing" and "checking" are probably inadequate for explaining what I had in mind. In taking sides or commenting on a debate of "testing" vs "checking" I may state or imply a hierarchy of value - i.e. that "testing" is better (or worse) than "checking".
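To make the distinction concrete, here is a minimal sketch (my own hypothetical example - the function, data and oracle are invented) of what I mean by a "check": a comparison of a scenario's outcome against an expected result or oracle that can be decided algorithmically, with no further thinking required once it has been written.

```python
# A minimal, hypothetical "check": observe an outcome of a scenario and
# compare it against an expected result (the oracle).
def check_invoice_total(line_amounts, expected_total):
    observed_total = sum(line_amounts)       # specific observation of the product
    return observed_total == expected_total  # algorithmic comparison with the oracle

# The check simply passes or fails; the judgement lies in deciding to write it,
# choosing the oracle, and interpreting a failure - which is where testing begins.
assert check_invoice_total([10.0, 15.5, 4.5], expected_total=30.0)
```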

"Testing" defers meaning in its relation to other words such as "exploration", "analysis", "scrutiny", "thinking", "investigation" - which themselves can only be defined by relation to others. Equally the word "checking" also only has meaning in its relationship to other words such as "control", "find out", "learn", “comparison” - each deriving meaning by relation to other words. At some point the "web of language" (another term defined by Derrida) may overlap and both words may derive meaning from at least some other common words, however this does not necessarily mean that they are the same.

They are (contentiously) defined by what they are not: "testing" is not "checking" (just as "automated" is not "manual", and so on). Their meanings cannot be determined outside the context of the text they were written in, the time at which they were written, and the fickle views and understandings of the writer and reader. No transcendental meaning exists outside of this.

These represent a dialectic, and from a Hegelian perspective an effort should be made to synthesize them into a whole. However, a deconstructionist perspective would hold that these differences are necessary (at least until new words with more sophisticated meanings are developed) and irreconcilable, and that they contain an inherent violence of meaning and value. This may explain why, after many years, we are still having angry debates online and at conferences about these oppositions.

So what can be done? The way to address this is to deconstruct each use of the words in their context and time.

• For example, when we see the word "testing", does this mean a specific incident of testing or the general practice?

• Instead of testing, what other words can be used to describe what the practitioner is doing? Is the word "testing" adequate?

• Who is apparently doing the "testing" - one person, a machine, a group of people, a context driven tester?

• When was the text written and how was testing defined at that time?

• Can a single word apply to all investigative approaches in every company? What does the author think of testing and what other words are used to describe it? What about the reader?

• Are there instances where "testing" and "checking" could mean the same thing? What may these be?

• Regarding the "value" of testing vs checking, who says that one is better than the other? How, and under what circumstances, can that hierarchy be overturned?

• What "testing" are they referring to, based on the questions above, and in what context? Has one always been better than the other?

What can those who engage in the debates do to clarify the points above to the reader? One way is to publicly state clear and sophisticated definitions of “testing” and “checking” as one sees them. James Bach and Michael Bolton, in their co-written 2013 article “Testing and Checking Refined”, go to great lengths to do just that - along with their definitions of related words that would fall within each one’s “web of language” (such as evaluation, exploration, learning etc.). Also included are narratives of the definitions along with various implications.

“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.

(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

(A check is an instance of checking.)”
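As an illustration (my own sketch, not Bach and Bolton's - the names, numbers and threshold below are invented), a check in this sense applies an algorithmic decision rule to a specific observation of the product, while the surrounding testing is the human work of deciding what to observe, which rule to apply, and what a failure actually means.

```python
# Hypothetical illustration of a "check" embedded within "testing".
# The decision rule itself is algorithmic; choosing it and interpreting
# its outcome are human (testing) activities.

def response_time_check(observed_ms, limit_ms=2000):
    """Algorithmic decision rule applied to a specific observation."""
    return observed_ms <= limit_ms

# The testing surrounds the check: deciding that response time matters here,
# picking 2000 ms as a meaningful limit (an oracle we may later question),
# and, on failure, investigating whether the product, the environment or
# the check itself is at fault.
observation_ms = 1850  # taken from some run of the product
if not response_time_check(observation_ms):
    print("Check failed - now the testing starts: why was it slow?")
```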

However, while this is better, from Derrida's perspective the extensive web of language behind these definitions and their related words, in the specific text and time in which they were written, will always prevent them from being perfect and unchanging - from serving as the basis of a founding philosophy of software testing.

Michael Bolton, in a follow-up piece on his blog, also attempts to take the hierarchy of value out of the debate. He makes clear that his (and James') side of the debate seeks to reconcile checking within a more complex definition of testing - one that embraces, but distinguishes between, the human, reflective, thought-driven tasks and the algorithmic, mechanical, process-driven ones.

“I would like to emphasize our goals here. Our purpose is not to denigrate checking, nor to disparage the use of tools, nor to deplore those people who are asked to do human checking. On the contrary: we’re attempting to deepen our understanding of our craft; to show that checking is deeply embedded in testing; to emphasize that tools and the skilled use of them are essential to our work in many ways; to realize that humans will always inject human elements into the things they do; to realize the value of those human elements and the risks involved in asking humans to behave like machines. We must be clear on the differences between what humans do and how our processes and tools—media, as McLuhan would call them—do. Or more accurately, the differences between what we do and how our tools affect what we do.”

This removes the opposition entirely by making checking a subset of testing - arguably a successful Hegelian, synthetic approach.

Epilogue


The questions posed in the example of the binary opposition “testing” / “checking” can be applied to texts containing the other contentious testing oppositions mentioned.

The process above is necessary, if not to remove or synthesise oppositions out of the picture, then to create propositions and assertions with more contextually accurate and sophisticated meanings. In the debates we are currently having, unless we deconstruct the concepts we use within the texts they are written in, we are arguing over shadows. Only then can we make headway in some of the tense binary oppositions that have dogged the testing profession for many years.

Useful Links and References









James Bach, “We use tools” http://www.satisfice.com/blog/archives/1598


Jeremias Rössler, "Testing vs Checking, so what?" https://medium.com/@roesslerj/testing-vs-checking-so-what-9eb4c97c166c

James Bach and Michael Bolton, "Testing and Checking Refined" http://www.satisfice.com/blog/archives/856

Michael Bolton, "Testing and Checking Redefined" http://www.developsense.com/blog/2013/03/testing-and-checking-redefined/