
Thursday 7 July 2016

Is Translation of Theories of Learning into Theories of Teaching Necessarily or Incongruently Incommensurable?

It is by avoiding the rapid decay into the inert state of ‘equilibrium’ that an organism appears so enigmatic; so much so, that from the earliest times of human thought some special non-physical or supernatural force (vis viva, entelechy) was claimed to be operative in the organism, and in some quarters is still claimed. How does the living organism avoid decay? The obvious answer is: by eating, drinking, breathing and assimilating… What then is that precious something […] which keeps us from death? That is easily answered. Every process, event, happening – everything going on in nature means an increase of the entropy of that part of the world where it is going on. Thus a living organism continually increases its entropy – or as you may say, produces positive entropy – and thus tends to approach the dangerous state of maximum entropy, which is death. […] the essential thing in metabolism is that the organism succeeds in freeing itself from all the entropy it cannot help producing while alive. – Erwin Schrödinger, What is Life? (1944)

How fortunate I have been over the past two days to attend a Professional Development session run by three outstanding academics in The Science of Learning. The series was run in conjunction with the outstanding work being done by educational researchers at The Science of Learning Centre in Melbourne, Australia and other affiliates. It was a brilliant series, and I could give you the specifics, but it all boils down to one point: they utilise both the science of learning and the expertise of teachers congruently to better refine and design educational frameworks – they collapse the distinction between lab and classroom in recognition of the complexities that arise. These complexities are not ideological, but rather a necessary consequence of the nature of reality. I will explore this a bit later on, but for now it can be put thus: the transfer between the science of learning and its practical application in the classroom is unequivocally complex.
My prior blog entries really form the foundation for the argument I am about to present; if you have not read them I suggest doing so, as they will give an indication of the presuppositions I utilise in order to explicate my theory. This blog rests on the presumption that: firstly, there is a current paradigmatic shift in the methodology of educational research that can be understood as evidence over intuition, or neuroscience over folk psychology; secondly, that neuroscience will necessarily explain the science of learning through Reductionism/Eliminativism; lastly, that the transfer of the science of learning into teaching strategies is not a logical inference but a deceptive one that exploits the science of learning in much the same manner that behavioural economics has done.
All this sounds very complex, but let me add a simple point. Everyone, teachers most importantly, already has a strong inbuilt pre-theoretical understanding of how learners learn and, thus, how to teach learners. The strength of our intuitions resides in (and can be reduced to) our inheritance of selected traits that are conducive to the survival of the species. Our ability to thrive and evolve as a species can be explained by the success of our ability to teach. Explicating these pre-theoretical beliefs almost violates the know-how. How often we hear teachers say “I already do that”, or more tellingly, “I now know I do that, but it never occurred to me why I was doing it or why it was working”.
As pointed out at the Science of Learning conference, chefs don’t need to understand the science behind taste in order to cook delicious meals. Chefs already have an inbuilt understanding of what constitutes a good meal. Any science of taste presented to the chef would only validate their knowledge. We could argue the same for teachers: we already know how to teach without knowing the science behind the learning. This is a valid analogy – but it misses a key point. The science of learning is so important for education because there are so many misconceptions in education that keep progress cyclical. Our desire to better understand the conditions for the possibility of learning will better equip us to discern between theories that have negative to no effect on rates of development and those which are most conducive to producing educational outcomes (Hattie’s statistical work here is crucial). What we are really concerned with is efficiency: our desire to produce outcomes (learning) through inputs (teaching) is guided by the necessity for efficiency and efficacy. Even educational research is not exempt from this fact: physics fixes all the facts.
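To make the efficiency point a little more concrete, here is the standard way Hattie’s comparisons are expressed (a textbook formulation on my part, not something presented at the conference): the effect size of an intervention is

\[ d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{SD_{\text{pooled}}}, \]

and Hattie treats roughly d = 0.40 as the ‘hinge point’ above which an influence is worth the investment. Anything below that may still ‘work’ in the folk sense while being a poor use of finite teaching inputs.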
In a brilliant article, A Bridge Too Far – Revisited: Reframing Bruer’s Neuroeducation Argument for Modern Science of Learning Practitioners, Horvath and Donoghue (2016) argue that the translation of neuroscience into strategies is impossible due to a limitation inherent to all scientific research: you cannot bridge the gap between non-adjacent compositional levels of organisation. More simply, you cannot define the biological processes conducive to survival by way of explicating the laws of physics – you must first translate the laws of physics into the laws of chemistry, and from there you can translate the laws of chemistry into the laws of biology. In order to build a cohesive explication of phenomena at any level of reality (or level-of-organisation) and make predictions (or build probabilities of phenomena) about what we would expect to observe, we must understand the preceding level-of-complexity. In the context of education, the level-of-complexity is constituted in the psychological and mediated through our folk psychological and phenomenological levels of reality. If Horvath and Donoghue are correct, and I believe they are, we cannot form a bridge between neuroscience and phenomenology, as it negates the mediating level-of-complexity of biology, irrespective of the fact that it is constituted in the phenomena itself. Neuroscience simply cannot make any predictions about what we might expect to observe in the classroom at any moment in time, even though the phenomena it studies are a determinant of those phenomena.[1] This leaves us with two options: disregard neuroscientific research as it cannot allow us to successfully build theories of teaching, or reduce our everyday familiar folk psychological concepts of learning to neuroscientific concepts and make predictions within this level-of-complexity.
Given that translation is necessarily incommensurable (as seen above), Horvath and Donoghue make a further distinction that is important when considering translation. Firstly, what generally concerns educators is a prescriptive translation that attempts to ‘instruct an educator and learner on what to do and how to do it’ – a process many claim to have achieved. The second translation would be a conceptual one, whereby neuroscience is conceptually instrumental in explaining the phenomena of learning – something I hope more educators will take seriously by becoming better informed about science of learning research, but which researchers will invert insofar as folk psychological concepts need to be eliminated in order to accurately explicate theories of learning; folk psychology is instrumentally useful insofar as it provides us with a familiar and intuitive conceptualisation, but it is not epistemically useful when it comes to making claims about the science of learning. Thirdly, a functional bridge allows educators to better understand the implications of neuroscience for learning – whether individually (for example, for students with developmental issues or brain damage) or, more broadly, for the species in general. Lastly, a diagnostic bridge allows educators to deduce from observation why phenomena may be the case when reduced to neuroscientific levels-of-complexity (Horvath & Donoghue, 2016, p. 2) – something we are ill-equipped to do and the reason why we work with professionals to better understand our students.
So far we have a nice linear picture of reality. Unfortunately, levels-of-complexity also produce unpredictable outcomes. As Horvath and Donoghue (2016) note, prescriptive translation is undermined by emergence, ‘a process whereby novel and coherent structures, patterns, and/or properties arrive at ascending levels that are not exhibited within or predictable by preceding levels’ (p. 3). If we associate a bridge with the means by which translation occurs, translation itself becomes only a means by which probabilistic outcomes can be inferred, and one that must also accommodate unforeseen probabilities arising by way of emergence.
For a prescriptive bridge, good theories of learning based on neuroscientific principles may or may not work in the classroom; may or may not work for some students; may work successfully for some and unsuccessfully for others; and might work with varying degrees of success depending on different causal factors. For conceptual translation, it remains highly speculative and unable to epistemically justify its assertions without further reductionist investigation – investigation that may undermine or contradict initial speculations yet still be consistent with the phenomena. Conceptual translation is permissible insofar as it provides approximations through explanatory power and predictive success, but it must necessarily carry a reductionist perspective and be open to falsification. In this manner it is instrumentally valuable but devoid of epistemic certainty. Thirdly, a functional bridge may show the causal associations between the levels-of-complexity, but these associations are subject to determinants of probabilistic outcomes – once again, instrumentally valuable but devoid of epistemic certainty. Lastly, a diagnostic bridge would still be subject to uncertainty – think of how often psychiatrists misdiagnose and mistreat patients using the same bridge.
If we are troubled by these conclusions then we appreciate the inherent complexities. But you may be thinking that this all raises the question – how come teachers so often get it right? How can teachers successfully teach and learners successfully learn if we are so constrained by probabilistic outcomes and emergent properties? Do teachers possess some biological aptitude for understanding how learning occurs – do they in some way have more accurate expert intuitions about learning? I think so, but with major restrictions – they are confined to folk psychological conceptions of learning and are not gifted with the intuitive capability to reduce phenomenological phenomena to lower levels-of-complexity. More simply, teachers can determine with high degrees of certainty what works and what does not work, but cannot claim why it works without resorting to folk psychological concepts to explain it. We are confined to folk psychology in order to explain learning as it exists on our level-of-complexity. But, as I have stated in the past, folk psychology cannot tell us why things work. Simply, our ability to understand learning is concerned with our desire to help students learn, not with explicating the science behind the learning process. Expert educators have expert intuitions, which have instrumental value but are devoid of epistemic certainty.
Are we left with any certainty? It appears we are only left with probabilistic outcomes. There is a reason why translating neuroscience into educational frameworks cannot determine learning, and it comes down to the probabilistic nature of reality – after all things have been reduced, all we are left with is uncertainty guided by entropy. All the science of learning can do is improve the probability that learning will occur, insofar as it designs or implements teaching and learning experiences that are most probably conducive to learning. Any translation that occurs is necessarily subject to probabilistic outcomes – whether linear or jumping between levels-of-complexity. Why? Because it is built into the laws of physics. Horvath and Donoghue are right insofar as they claim the bridges between levels-of-complexity are incommensurable, but they don’t go far enough. Even inherent bridges within the same level-of-complexity do not produce necessary outcomes. Any open system provides only probabilistic outcomes from one moment to the next. The one thing that unifies all levels of complexity is not a thing but a process: that process is entropy. One thing that scientific research has taught us is that the working of any organism, or the conditions that make any organism possible, accords with exact physical laws. To draw an analogy, if translation is the bridge between levels-of-complexity, entropy built that bridge.[2]
All things on all levels-of-complexity are guided by entropy. The causal link between levels-of-complexity is entropic. The levels-of-complexity for, let’s say, the human body form a cohesive structure guided by entropy at each level. Each level-of-complexity can be thought of as an open system that tends towards disorganised states due to statistical probabilities. At the atomic and molecular levels-of-complexity, entropy is a measurement of the order/disorder of particles; at the cellular level in humans, the process of borrowing energy from highly ordered structures happens through mitochondria. Vital organs are made up of cells that feed off energy through metabolism; this process is necessary to keep organs alive, and the decline of organs can be reduced to increases in systemic molecular disorder. A neuron in the brain is a great example of an open system insofar as it exchanges and transmits energy with its environment – a neuron is an entropy-processing mechanism.
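To put this in the usual thermodynamic shorthand (a standard textbook formulation, not something drawn from Horvath and Donoghue): for any open system the change in entropy splits into what the system produces internally and what it exchanges with its surroundings,

\[ dS = d_i S + d_e S, \qquad d_i S \ge 0, \]

and an organism stays far from equilibrium only by keeping the exchange term sufficiently negative – exporting entropy to its environment, or, in Schrödinger’s phrase, feeding on ‘negative entropy’.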
The theory of emergence and its link with entropy is important. Open systems (whether a cell, an atom, a human or a society) at all levels-of-complexity are not made possible by discrete subsystems or components – they are made possible (and their functions made probable) by a composition of a multiplicity of parts unified by a set of well-defined relationships that determine the scope of probabilistic outcomes. Everything is subject to this: our classrooms are not made possible by the mere multiplicity of parts – students, teacher, room, equipment, lighting, heat, everything – and cannot produce outputs without well-defined relationships. These subsystems (parts) are engineered in such a way as to be most conducive to learning, but they can also necessarily lead to unexpected or emergent results when unified under a higher level-of-complexity. We cannot look at the parts and determine an outcome. Well-defined relationships between parts constrain what is made possible, but they are still not determinative, because the parts working together create a new level-of-complexity. Though the parts and relationships are constituted in the new phenomena, they were only the conditions for the possibility of emergent properties. Having said that, emergent properties must also adhere to entropy, even if they appear to contradict it in the formation of more complex and ordered states.
This takes us beyond our simplistic reductionist picture of reality, by which reductions can account for and give rise to causal relationships between levels-of-complexity. We now know that, whilst each level-of-complexity is the condition for the possibility of the next, its causal chain only defines probabilistic outcomes, and those outcomes are also able to develop emergent properties that could not have been made probable at the preceding level-of-complexity.
We could end the story here and resign ourselves to the fate of incommensurability: the emergence of unpredictable phenomena that are irreducible to lower levels-of-organisation. We could argue the futility of designing theories of teaching based on theories of learning, since learning occurs at the neuroscientific level-of-complexity whereas theories of teaching operate at the psychological one. Nonetheless, we know there is a correlation between the science of learning and theories of teaching, and we can predict with a great degree of success whether students will learn. The question is whether we can prescriptively design theories of teaching that are translated from neuroscience. According to Horvath and Donoghue, the answer is no. But what if we change the language by which the prescriptions are given? What I am saying is that in order to design a theory of teaching that is translated from a theory of learning, we necessarily have to reduce the theory of teaching (which is at our level-of-complexity) to the level-of-complexity of the science of learning. In effect, we are to do with education what Churchland has done with the Eliminativism of Folk Psychology: destroy the way in which we talk about it and understand it, i.e. reduce folk psychological concepts of learning to neuroscientific concepts.
It would look something like this: if by ‘Tim has learned well’ we really mean ‘Tim’s neurons have built dendrites and strengthened neural pathways by way of releasing neurochemicals into the synapses between neurons’, then we are correct in stating that Tim has learned. Learning, in this way, has undergone a reduction in order to be more accurately explicated. Only when we understand this reduction better can we more accurately build theories of teaching. The question becomes not whether translation is possible (because we know it is not, as it is incommensurable), but which theories of teaching better enhance or provide the conditions for the possibility of the science of learning taking place. Because theories of teaching are grounded in folk psychology and communicated or embodied through our social interactions with the world, folk psychology is the only vehicle by which learning can, and does, occur. Teachers and researchers would utilise folk psychology as instrumentally useful, and neuroscience as what provides epistemic certainty, because we would be taking the science of learning behind the phenomenology seriously. Because levels-of-complexity build a cohesive scale of reality, with indeterminate and emergent properties considered, there is a fundamental correlation between our folk psychological concepts and the neuroscience behind them – the question is, how do we design theories of teaching that exploit this correlation to maximise educational outcomes? We know this is possible. After all, we have been successfully teaching and learning as a species for eons irrespective of an inbuilt incommensurability. To enhance this, we don’t need to translate up levels-of-complexity, because we can reduce folk psychological concepts to neuroscience in order to produce a prescriptive translation that is then mediated through folk psychology. This, in turn, is conducive to our survival (folk psychological concept), as its function is to assist in momentarily deferring decay by drawing negative entropy into our open system (reduction to entropy). We have a cohesive picture irrespective of the incommensurability. Why translate up when we can reduce down and turn a weakness into a strength?
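For what it is worth, the ‘strengthened neural pathways’ talk does have a standard textbook formalisation (a simplification on my part, not a claim from the conference or the paper): in Hebbian terms, the weight of the connection between two neurons grows with their correlated activity,

\[ \Delta w_{ij} = \eta \, x_i \, y_j, \]

where \( x_i \) is the presynaptic activity, \( y_j \) the postsynaptic activity and \( \eta \) a small learning rate – ‘cells that fire together wire together’. The reduction sketched above trades the folk claim ‘Tim has learned’ for statements of roughly this kind.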
Learning is no simple process. It is a vastly more complex system than our folk psychology can intuit – but intuitions were not designed to make sense of reality, only to assist in inheriting, utilising and reproducing traits that are conducive to biological principles fundamentally aimed at slowing down the corrosive yet inescapable decay of order into disorder. The translation of the science of learning into theories of teaching ought to become a new area of research that studies the correlation between the phenomena of each level-of-complexity and reduces folk psychological conceptions of learning (which we intuitively understand) to well-defined scientific theories of learning (which only the scientific methodology can disclose). The Science of Learning Centre ought to be commended for building partnerships between teachers (who have expert intuitions about what works and what does not work in the classroom) and researchers (who can explain, or are starting to understand, the science behind the phenomena of learning). This is new and exciting research and will have repercussions for the way we view learning and teaching.

Works Cited

Horvath, J. & Donoghue, G., 2016. A Bridge Too Far - Revisited: Reframing Bruer's Neuroeducation Argument for Modern Science of Learning Practitioners. Frontiers in Psychology, 16 March, 7(377), pp. 1-12.

Schrödinger, E., 2015. What is Life?. 17th ed. Cambridge: Cambridge University Press.



[1] But wait… what about all those claims (even the ‘evidence-based’ ones) about Neuroeducation? It is for these reasons that I have claimed the education system is rife with exploitative and profit-driven agendas. Quite literally, if we take the science seriously then it necessarily undermines the claims of Neuroeducators. But we have already established this point – it’s time to go deeper.
[2] Rather fittingly, it will also be the process that erodes it – but there the literal has violated the metaphoric.

Saturday 2 July 2016

Incommensurable Frameworks: Translating Theories of Learning into Theories of Teaching

Squaring the Circle: Correlation Does Not Equal Causation
How can we square the circle between competing ideologies in education and the science of learning? The convergence is no easy feat – ideologies are prejudiced, evolve over time and are guided by vested interests; scientific findings and the truth of reality do not (or should not) discriminate at their own convenience, nor do they have to conform to the reality we desire. If understanding the truth about the science of learning is our objective, then it stands to reason that ideologies must take a step back in the decisions made about education.

I recognise that some people are of the opinion that scientific research is also ideological. On this I make two points. Firstly, as noted above, the science of learning is epistemically concerned with truth. As such, it ought to be subject to the same rigorous and systematic testing as any scientific research; it must also be open to falsification. Secondly, and more importantly, scientific research differentiates itself on the basis of methodology. I have touched on this point in several of my past blogs. These two ideas are not distinct but rather inform one another. If this distinction is correct, and I believe it is, then it is central to understanding the paradigmatic shift currently underway in educational research – from intuition to evidence. Falsifiability requires a methodology and strict criteria that give credibility to ideas – these ideas, whether born out of intuition or experiment, must provide evidence, explanatory power and predictive success, and survive counter-evidence that would otherwise undermine their credibility. Generally, educational research has not upheld this rigour. This is indicative of the fact that education has been a hallmark of social scientific study and not of scientific study itself. In other blogs I have shown how this has been exploited in the education system.

The fact remains: we must begin with the science of learning and build theories of teaching to complement it. These theories of learning cannot be ideological, nor can they be credentialed on the basis of intuition-based justifications – they may claim truth, but they provide none of the evidence it takes to claim that something is true. This is all in the abstract – let’s look at something more concrete.
A major paradigm in educational research has been Constructivism. Fundamentally, constructivism provides a theory of learning which suggests that humans construct knowledge through an ‘active creation’ between prior knowledge and new experiences in the world. Constructivism has provided a gamut of theories of teaching that translate the theory of learning into an applicable theory of teaching. Just think of the repercussions that this theory of learning implies: it sees the teaching of knowledge transfer as futile insofar as information can only be subject to interpretation; it advocates for teacher as ‘facilitator’ and dismisses ‘chalk and talk’ or direct and explicit instruction; it asks for ‘minimal guidance’, ‘problem-based’, ‘real world’, ‘experiential’, ‘inquiry’ teaching; it helps market and provide evidence for a whole range of educational products. Now I am in no way saying constructivism has no place in education – I hope to establish the opposite in good time. But it cannot lay claim to providing an explanation of the truth behind the science of learning, and it cannot do this for one very good reason – it uses intuitive justifications as explanations of how learners learn. I will put it another way: constructivism uses our biologically driven theory of mind to attribute intentionality and mental states to other people – just look at the discourse of ‘active agent’, ‘meaning maker’ and so forth. This discourse is all based in Folk Psychology, and as I have stated previously, folk psychological principles must be reducible to scientific principles in order to provide an accurate and objective account of the science of learning. So is constructivism consistent with scientific findings? Here is where it becomes rather complex.
Constructivism as a theory of learning is highly unscientific. All the folk psychological principles outlined above cannot be reduced to scientific principles: ‘meaning maker’ and ‘active agent’ are not elusive but, rather, illusory. According to Eliminativists, this means the concepts need to be abolished. But we cannot forget that we are in the business of interacting with human beings – human interaction is not epistemically concerned with truth but is concerned with all the wonderful experiences and connections we make with our students and the joy that brings to our lives. Our students, to us, are not (insofar as we interact with them and they experience the world) merely neurons and electro-physical interactions. We attribute them with mental states for three good reasons: firstly, it is biologically determined that we do so; secondly, because we ourselves experience in a manner which folk psychology describes with a high degree of accuracy and we attribute the same experiential phenomena to our students through empathic relationships; lastly, because students experience the world and have intentions, hopes and desires. Irrespective of the truth behind the science of learning, we can never square the circle between the reductionist truths that science discloses and the folk psychological ideologies that give meaning to our lives and define our experiences in the world because the two are necessarily incommensurable.
On the other hand, there is plenty of ‘evidence’ to suggest that constructivism is conducive to educational outcomes. Whether or not it is scientific in its explanatory power, it can still provide a means of facilitating learning and conceptual growth in students – it is not an inhibitor to learning, and as a theory of teaching it is widely used in order to achieve educational outcomes. How can this be so? How can it be a successful theory of teaching without there being any credible evidence for it as a theory of learning? It’s simple: constructivism is a good theory of teaching but a false theory of learning. Let’s explore this in more detail.
Correlation does not equal causation. So often does this provide an answer to seemingly apparent paradoxes and contradictions that it seems almost trivial as a response. Alas, the correlation between constructivist theories of teaching and the achievement of educational outcomes cannot, in itself, establish that constructivism is the cause of those educational outcomes. In other words, the constructivist theory of teaching is not itself the cause of the educational outcomes, but approximates something that is reducible to a scientifically understood principle of learning. Surely we arrive at a misnomer? But this must be true, since all claims to truth must be consistent with the laws that govern everything in this universe. So what next?
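A toy illustration of the point (my own example, not the blog’s): suppose a hidden factor Z – say, prior knowledge – drives both the uptake of a constructivist activity X and the measured outcome Y, so that

\[ X = Z + \varepsilon_1, \qquad Y = Z + \varepsilon_2, \qquad \operatorname{Corr}(X, Y) = \frac{\operatorname{Var}(Z)}{\operatorname{Var}(Z) + \operatorname{Var}(\varepsilon)}, \]

with the noise terms independent and of equal variance. With unit variances throughout, X and Y correlate at 0.5 even though X has no causal effect on Y whatsoever: the correlation is real, the causation absent.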
Even though students are not ‘active agents’, their experience in the world tells them otherwise: their efficacy (science of learning) in achieving tasks relies on the feeling of agency and independent inquiry (feeling of meaning making); students self-reflect on their learning (feeling of ownership), even if they have far less deliberate, controlled and linear command than it appears they have (science of learning); students feel they are thinking about problems (active agents) even if they are recalling memory and storing reformed memories (science of learning) – these are the narratives that students tell themselves, and teachers tell of students, in order for learning to take place. Despite this, the concepts will always be elusive to science because they are illusory, products of physical interactions in the brain. This is why, all things considered, we need to build theories of learning and teaching that consider both the phenomenological and the naturalistic perspectives: the phenomenological will never be able to disclose the truth of the science of learning but, by the same token, the science of learning (even through reduction) will never be able to disregard the phenomenology of conscious learning experience, even if it can account for it. Both are needed in order to design better educational systems.
If everything I have stated so far is correct (as a scientific hypothesis it is open to falsification), then it has a very interesting implication, which I will get to shortly. But my first point is this: I am sceptical of the ease with which translating theories of learning into theories of teaching is currently being done. Not only is it a great way of marketing products as ‘evidence-based’, but it seems an overly superficial process (albeit an intuitive one) by which a theory of teaching is designed to superficially embody the theory of learning – pretty logical and simple. In Homo Economicus I stated:
“For Theories of Learning, we can use Behavioural Economics as an exemplar in the utilisation of naturalism as beneficial for enhancing an understanding of the underlying principles involved in decision making – in our case, educational not economic. For Theories of Teaching, there is an exemplary application of these understandings to enhance economic outcomes, or in our case, educational outcomes. These outcomes, however, come at a contentious and foreseeable cost: they are deceptive insofar as they utilise these understandings to enhance economic outcomes. The great moral question for education, however, would be whether we should use the same deception to enhance learning outcomes. Theories of Teaching, built on these principles, would look vastly different from the way in which we design Theories of Teaching now, as we currently aim to provide a transparent and harmonious association between Theories of Learning and Theories of Teaching. Behavioural Economics does not disseminate its theories in manifestly transparent ways that allow the individual to transcend their impulses – it exploits the consumer on the basis of deception in order to enhance capital gain. I am not sure what is to be taken away from this… perhaps I will return to this at some time in the future.”
If we are to take the science of learning seriously, just as behavioural economics took the science of economics seriously, then we necessarily cannot produce theories of teaching that ‘superficially embody’ the theory of learning – that would disregard the narratives and phenomenological perspectives of the learner, which would have catastrophic implications for learning. Take, for example, Willingham’s thesis that students don’t like school because the brain is not built for thinking. Now this theory, consistent with scientific research, is a theory of learning: students rarely think when faced with problems but rather revert to memory recall as a ‘quick and dirty’ solution that conserves energy best spent on more pressing matters like survival or reproduction. If we built a theory of teaching that ‘superficially embodied’ this theory, not only would it be highly unethical, it would be highly demoralising and counterproductive for producing educational outcomes. It also, might I add, runs contrary to the folk psychological intuitions we attribute to the learner – despite knowing that what we perceive as “lazy thinking” reduces to behaviour that is actually optimal and conducive to the survival of the individual, we still reach for the folk psychological labels ‘disengaged’ or ‘lazy’, because that is how the behaviour presents itself at our level of interaction. When communicating with the student, it would be of no benefit to say that they are behaving in an optimal way, as our measure of success is learning and educational outcomes. We can’t let the truth get in the way of a good story – if we did, it would become an impediment to learning.
So what then? How do we build a theory of teaching based on what we know about the science of learning if it can’t mirror the science of learning? Remember in Homo Educationist I stated that economists deceive customers into thinking they have more agency in consumer choice than they really have? And in The Story So Far I referred to psychotherapy and the ‘talking cure’ as a means of treating psychological conditions, despite the correlation with improvement lacking any established causation? The same must be true of learning. In the same way that talking deceives patients into thinking they have overcome their ailments, and in the same manner that behavioural economists deceive their customers into thinking they are making choices about products, teachers should be deceiving students in a manner that nudges them out of their default modes and engages them more deeply in their learning – and I say deeply in the purely folk psychological sense, whatever that means.
The science of learning has disclosed how learning takes place – memory recall is a great example of the same type of deception as noted in behavioural economics. You are not telling the students that their brains are mediating memory readjustment in the hippocampus – you are providing experiences for this to occur and utilising the narrative of phenomenology (how this physical process appears in consciousness) to allow the process to happen. I have said it before, and I will say it again: to build successful theories of teaching, we must begin with the science of learning. Only from here can we build theories of teaching that will truly and accurately utilise the science of learning to produce quality educational outcomes. We must overcome our disposition to build theories of teaching based on theories of learning that superficially model them and are born out of logical and intuitive inferences. Translation ought not be a logical process but a deceptive one that converges two incommensurable frameworks (the phenomenological and the scientific) in a manner that enhances and complements the principles that the science of learning has disclosed.


Friday 1 July 2016

The Story So Far

In Homo Educationist I established a methodological framework by which to base educational research on insights from Behavioural Economics. In it I argued that, despite the qualitative differences between economic and educational research, this is justified insofar as both are dealing with the same biological principles and brain processes; the methodologies of research in both are naturalistic; and both are epistemically concerned with truth, so the evidence from both research areas ought to be cohesive. More importantly, both have undergone a paradigmatic shift from intuition to evidence as a means of providing an accurate understanding of reality that is in line with brain and cognitive research.
These are not isolated examples: Hattie’s revolution in education is analogous to many other paradigmatic shifts in other fields of research. One may ask why it has taken so long for a paradigm shift in educational research. In Education is a Science and Not a Social Science I set out to argue not only for a turn towards scientific methodologies (as necessitated by Hattie’s Visible Learning research) but away from the intuition-based methodologies that fundamentally dominate educational research. I suspect this education revolution has come now for four reasons: firstly, corrosive post-modern research has relativised claims to truth and fought hard to undermine the validity of scientific research by defaming its claims as oppressive; secondly, education has been a cornerstone of social science research, and the majority of education departments in tertiary education worldwide belong to social science and humanities faculties and have therefore adopted the above distrust of scientific advance; thirdly, there is the necessity to misrepresent scientific research in order to market and sell theories of teaching in a manner that is ideologically acceptable and attractive to those who are invested in financially profiting from education, as argued against in What Constitutes as ‘Evidence’ in Evidence-Based Educational Practice?; lastly, the presupposition (and biological default mode) of humanism disseminates inherent ideologies that subjugate and motivate at every level of the educative system – from the classroom right through to policy. For these reasons I believe the paradigm has yet to shift, as outlined in Naturalising Education: A New Paradigm?
In psychotherapy, treatment involves a troublesome dichotomy: to treat using psychotherapy or through pharmacology? This is not to say both can’t be used in conjunction, but they rest on very different assumptions about how the mind works (and by mind I mean brain). The former is much aligned with humanism and the ability of the individual to overcome through the application of intentionality – much like the infamous ‘talking cure’ of Freudian Psychoanalysis. The latter rests on an interventional approach whereby the patient is subjected to drugs that chemically alter physical processes in the brain. Education has seen many analogous movements: Behaviourism and the infamous operant conditioning of B.F. Skinner’s research removed intentionality altogether and determined outcomes based on contextual influences; Rudolf Steiner’s Waldorf approach implored students to take command of their interests and make sense of the world through discovery and intention. One way of looking at these theories is through dichotomising – in keeping with our biologically driven desire to categorise and discriminate on the basis of qualitative differences, as spoken about in Homo Educationist. When teachers are taught these theories and theorists, it is done in a manner that becomes polarising and political. Irrespective of this, the complex phenomena of teaching and learning (no matter where they can be found on the ‘spectrum’ of ideologies) can be found in all schools and at all times. Our tendency to simplify by narrating complex phenomena blinds us to the integration of the whole spectrum of conjectures in every day and at every moment of human and educative interaction. In the same way that even the most biologically astute and pharmaceutically driven therapist will converse with patients to understand their phenomenological perspectives, so too will the most naturalistically astute and reductionistically driven teacher interact with students using our everyday phenomenological and intuitively driven default modes of human interaction – we are, after all, bound by the same biological and physical processes and laws that govern everything else in the universe, as explored in both Naturalising Education: Conceptual Foundations for an Evolutionary Educational Psychology & The Explanatory Power of Neuroscience.

This has been the story so far. The moral, you may ask? That what is often perceived as simple is far more complex than our intuitions give it credit for. So what does this mean for teaching and learning? That will be my next exploration.