About the Author(s)


Dirk J. Geldenhuys
Department of Industrial and Organisational Psychology, Faculty of Economic and Management Sciences, University of South Africa, Cape Town, South Africa

Citation


Geldenhuys, D.J. (2023). Ethics in a digital world: The contribution of applied neuroscience. Journal of Applied Neurosciences, 2(1), a9. https://doi.org/10.4102/jan.v2i1.9

Editorial

Ethics in a digital world: The contribution of applied neuroscience

Dirk J. Geldenhuys

Copyright: © 2023. The Author. Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’ (Metz & Schmidt, 2023). Digital technologies do not respect boundaries. The internet does not respect differences between cultural values and their respective ethical principles. Nor does technology respect the boundary between humankind and machine, with cyborgs, artificial intelligence technologies and genetically engineered superhumans emerging as agents of conduct. These challenges stimulate the current debate on the call for global ethics in the digital age (Stahl, 2021). Stahl argues for a systems perspective on global ethics, in which the social system is well developed but influenced by technology.

Global ethics needs to address questions such as whether to adopt one ethical philosophy at the expense of others, how to resolve ethical dilemmas, and how to account for non-human agents. Furthermore, a global ethics philosophy also needs to be relevant to our practice as professionals in our day-to-day contact with clients and colleagues. In answer to this quest for global ethics, Kantar and Bynum (2021), drawing on the philosophy of Aristotle, propose ‘Flourishing ethics’ as an approach.

The argument offered in this editorial is that applying neuroscience to the ethics debate provides a sound meta-theoretical framework that is not only applicable on a global level but also relevant on an interpersonal level. Neuroscience provides the foundations of human experience and behaviour and is equally relevant to human beings’ daily dealings with technology. I am thus arguing for a neuroscience of ethics as a living philosophy. The argument is not about researching or applying neuroscience ethically, but about introducing neuroscience into the ethics debate in a digital era.

The premise of the argument is based on neuropsychotherapy as an application of neuroscience, with its emphasis on the vital role of the environment in psychological development towards either well-being or the onset of pathology. Synapses are modified by experience, and different environments lead to different expressions of the same gene (Rossouw, 2014). The experience of compromised environments leads to a restructuring of the brain to enhance survival, but this comes at the expense of the effective neural development of frontal cortical systems that are necessary for well-being and thriving in all spheres of life (Grawe, 2007).

According to Grawe’s model, life experiences set the trajectory for two different motivational schemata, namely schemata responsible for protection (avoidance-based) and schemata responsible for enhanced neural development (approach-based). Grawe identified four basic needs that must be met through the application of approach-based schemata for neural growth and proliferation to occur. When these needs are compromised by life experiences, however, avoidance-based schemata are applied for protection, which inhibits neural thriving. Prolonged or severe application of avoidance behaviour leads to the development of an anxious brain and the onset of pathology.

The basic needs for control and orientation are regarded as the most essential needs to be met (Grawe, 2007). Humans need to experience a sense of control over their environment, and this experience correlates with the perceived number of options available to act upon. Furthermore, meeting the need for orientation, viewed as the ability to assess and understand a situation or to experience clarity about the future, enhances this experience of control.

The need for attachment, viewed as the use of proximity to regulate fear (Cozolino & Sprokay, 2006), is the first need humans try to fulfil and remains important throughout life. This is, for instance, evident in our need for belonging, trusting relationships, and mutually reliable support systems to fulfil our goals in life.

The need for pleasure maximisation or distress avoidance is related to the release of dopamine in the reward system of the brain. Experiences are neurologically evaluated as either good or bad, with the motivation to maximise good experiences and to avoid or minimise bad experiences (Grawe, 2007). According to Grawe (2007, p. 244), we are in a maximal state of pleasure when our ‘current perceptions and goals are completely congruent with one another, and the transpiring mental activity is not disturbed by any competing intentions’.

The basic need for self-esteem enhancement (motivated by approach-based schemata) or self-esteem maintenance (motivated by avoidance-based schemata) emerges from the other needs. Self-esteem is defined as a person’s subjective self-evaluation of his or her worth as a person (Grawe, 2007; Rossouw, 2014). The development of self-esteem depends on conscious self-awareness and the capacity for reflective thinking. Although these qualities are primarily facilitated by the neocortical areas of the brain and hence are the last to mature, the development of self-esteem is already influenced by life experiences relating to the other basic needs (Rossouw, 2013). The motivation to protect self-esteem can, for instance, become visible in fight, flight or freeze reactions, and the motivation to enhance self-esteem in the display of curiosity.

Considering the aforementioned, the relevance of the theory for providing a framework for global ethics in a digital era that is also applicable on a micro-level is evident. The influence of technology on meeting our basic human needs and on our motivational schemata should not be underestimated. For instance, ethical questions can be asked about the extent to which the development and use of technology assist humans to experience control in pursuing their objectives, such as technological developments in the medical field, or about the possibility that technology will come to control us, as feared by Elon Musk and others. Similarly, ethical questions can be asked about the extent to which technology is used to meet the need for attachment. To what extent is technology helpful in developing a sense of belonging and support, for instance, by making connections across continents and cultures that were otherwise not possible, or by providing online services such as in the banking industry? Questions can, however, also be asked about the unethical use of technology to separate people through the creation of a digital divide, where only some people reap the benefits of technology at the expense of others.

Furthermore, considering the need for pleasure maximisation or pain avoidance, it can be argued that the ethical use of technology might serve as a source of pleasure by making life more enjoyable, such as by bringing the outside world into the living room. However, technology might also be user-unfriendly or even abused, as is well known in the workplace, for instance by spying on employees or keeping emails on record to use against colleagues. It could also lead to addiction if used for instant gratification only.

Lastly, ethical questions can be asked about the use of technology to enhance self-esteem, for instance, by providing support that enables personal development, such as education. Conversely, people without technological skills might try to avoid using technology altogether. These are but a few examples to illustrate the relevance of applying Grawe’s model, even outside the client–therapist relationship.

Overall, developing and/or using any form of technology that frustrates the fulfilment of the basic human needs discussed here, constantly activates the fear-based system and may even lead to pathology can be regarded as unethical. Conversely, providing an enriched environment that activates approach-based motivational schemata and thereby facilitates well-being and flourishing can be classified as ethical conduct.

By introducing neuroscience into the current ethics debate, digital ethics as a living philosophy can be defined as the systemic integration of digital technology and human minds in such a way that technology meets basic human needs and thereby advances human well-being rather than harming it. This can primarily be achieved by using technology to provide an enriched environment that facilitates the fulfilment of these needs.

Based on the aforesaid, it can even be stated that the neuroscience of ethics is relevant not only to the development and use of technology in all its forms but to all encounters between human beings. Applying neuroscience to ethics as a living philosophy may thus contribute to the development of an ethical world that encompasses all spheres of life.

References

Cozolino, L., & Sprokay, S. (2006). Neuroscience and adult learning. New Directions for Adult and Continuing Education, 2006(110), 11–19. https://doi.org/10.1002/ace.214

Grawe, K. (2007). Neuropsychotherapy: How the neurosciences inform effective psychotherapy. Psychology Press.

Kantar, N., & Bynum, T.W. (2021). Global ethics for the digital age – Flourishing ethics. Journal of Information, Communication and Ethics in Society, 19(3), 329–344. https://doi.org/10.1108/JICES-01-2021-0016

Metz, C., & Schmidt, G. (2023, 29 March). Elon Musk and others call for pause on A.I., citing ‘Profound risks to society’. The New York Times. Retrieved from https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html

Rossouw, P.J. (2014). Neuropsychotherapy: Towards a theory of brain-based therapy. In P.J. Rossouw (Ed.), Neuropsychotherapy: Theoretical underpinnings and clinical applications (pp. 21–41). Mediros.

Stahl, B.C. (2021). From computer ethics and the ethics of AI towards an ethics of digital ecosystems. AI and Ethics, 2, 65–77. https://doi.org/10.1007/s43681-021-00080-1


