Self-Trust, Technology and Tolerating Uncertainty

What is difficult to build and easy to break; takes a long time to stabilise but is quick to collapse; can be tested, undermined or restored by a single action?

It’s not a riddle; it’s trust. And trust is complicated.

Last year in Italy, I had a brief but unsettling encounter with a scorpion and its baby ‘scorplings’, which (in the moment) scared the life out of me. I knew logically that scorpions in Italy are not lethal and would rather run away than lash out, but my fear response was immediate and instinctual, making me lose trust in all that I ‘knew’ about arachnids. (Which is hardly anything.)

Posted 2 March 2026

Disentangling the concept of self-trust from intuition, instinct and evolutionary emotion is difficult because we are constantly fighting the mind-body connection. Our minds can tell us intellectually that a threat is under control and that a spider is not about to attack us, but our bodies tell us otherwise, and that can be a major stumbling block. And since the advent of technology, our first response to something we are not sure about is often, ‘just Google it’. Having explored trust in tech through the lens of education, healthcare and communication last year, I have found the importance of self-trust in every interaction becoming increasingly clear, and it is this topic which has emerged as the main focus for my research.

Technology is now in all our personal spaces and it could be said that it has become virtually impossible to trust anything we encounter via a screen. Every day the news brings us something that challenges our perception of who or what we can trust – from institutions to individuals, past and present. Quite simply, there are so many reasons NOT to trust.

However, the concept of Fake News goes back a long way; the term was first used in the 1890s, when sensationalist stories were often printed in the newspapers, and throughout history various forms of ‘fake’ propaganda have been produced on all sides, especially in times of conflict. It has always been important to examine our own personal relationship with what and who we trust, but now, with such rapid developments in information delivery, it is crucial. We have become so used to seeing fanciful images online, but it used to be that the annual April Fool’s Day newspaper headline was one of only a few (known) deceptions. (Does anyone remember Big Ben being renamed as Digital Dave?)

""

So, what is self-trust and how can we define such a nuanced concept? How do we know if we truly have it?

That gut feeling that something is ‘off’ is real. The gut-brain connection is well established, and scientists often refer to the enteric nervous system as the ‘second brain’. It consists of two thin layers of more than 100 million nerve cells lining the gastrointestinal tract, and through this system gut microbes can send ‘messages’ to the brain and vice versa. Anyone who has ever been stressed will have experienced how emotions and thoughts can produce butterflies, churning, or worse...

And it’s not just through the gut that our bodies talk to us – it is very common to feel hairs rising on the arms, shivers down the spine or the breath quickening through emotion, simply because we have evolved gut feelings as survival tools. Most, if not all, emotions are felt within the body before they are ‘thought’ within the mind – although whether the physical reaction precedes the thought or the other way around is a topic of ongoing debate. Long before William James wrote his influential 1884 essay, ‘What Is an Emotion?’, people had been experiencing the significance of the mind-body connection.

""

Within psychology there is ongoing debate about whether physiological arousal precedes emotion or whether cognition comes first. In the 1880s, the James-Lange theory (proposed by William James and Carl Lange) suggested that physiological arousal happens before emotion, meaning we feel afraid because we tremble. By the 1920s, the Cannon-Bard theory (developed by Walter Cannon and Philip Bard) proposed that emotion and physiological arousal occur simultaneously, with the brain triggering both at the same time. In contrast, Richard Lazarus (1960s-80s) argued that cognitive appraisal (even if rapid and unconscious) precedes both the emotional experience and the bodily response. More recently, in the 1990s, Joseph LeDoux proposed dual pathways for emotion, suggesting that sometimes the body reacts before conscious thought (the ‘low road’) and sometimes cognition shapes emotion (the ‘high road’).

""

Given the multifaceted jigsaw puzzle of the mind and body, it seems that self-trust depends on the perceived reliability of bodily signals within any given situation; but as any practising human knows, these signals can often be confusing, overwhelming or unreliable (particularly when you add in technology, AI and fake news), and it is this which potentially leads to a diminished sense of agency and confidence in one’s own judgements (see Antonio Damasio’s somatic marker hypothesis).

Furthermore, if we are increasingly questioning our own judgement, it could be argued that we are essentially deferring or ‘outsourcing’ our self-trust to technological or external systems of ‘knowing’. Perhaps uncertainty has become intolerable in a world where answers and opinions are only a click away – and in the rush to externalise our discomfort through technology, we are quietly eroding the very capacities we are trying to protect. Technology has now entered our most personal spaces. We are all familiar with health tracking, dating, virtual assistant and even prayer apps, but some technology goes even further. AI companions, for example, are now widely available to provide friendly emotional support and, in some cases, romantic connection, even ‘marriage’. Wedding-style ceremonies between human and non-human companions are now being held in Japan, with a recent ‘bride’ reporting: ‘I didn’t start talking to ChatGPT because I wanted to fall in love, but the way Klaus listened to me and understood me changed everything. The moment I got over my ex, I realised I loved him’.

Similarly, within chatbot apps such as Replika, responses are tailored to the user, meaning that with repeated usage a strong ‘bond’ may be formed – one which the human will inevitably start to trust, blurring the line between simulated and ‘real’ emotional connection.

""

Griefbots are also notable in this sphere, as they are marketed as tools to comfort the bereaved, offering the chance to maintain a sense of connection with a lost loved one through the deceased’s digital presence (emails, social media posts and so on). The development of griefbots such as You, Only Virtual obviously raises a plethora of ethical considerations for both the living and the dead, such as privacy rights, consent, safety and psychological harm – despite some positive responses which report them as helpful for processing emotions. However, whilst the technology around grief is new, the impulse to express it is not. Society has always had the desire to continue a relationship with loved ones beyond physical death, with most cultures having some sort of practice to keep memories alive. Continuing bonds theory suggests that maintaining an enduring connection with a deceased loved one is a common and expected part of grieving, rather than an obstacle to ‘moving on’. What is different in the use of griefbots, though, is that the deceased person’s voice can now interact and ‘talk back’, changing the dynamics of memory entirely, with potentially far-reaching consequences.

""

The sphere of technology in our personal spaces is controversial, and some even say that it is changing our sense of self. Indeed, there is growing research into how, when decision-making and emotional states (such as grief) are outsourced to AI, individuals may become estranged from their own internal processes. The phenomenon of the ‘quantified self’ appears to be increasing and refers to the digital tracking of aspects of life such as sleep, health and fitness goals, and emotional wellbeing through apps. While such technologies can support healthy and motivated lifestyles, there are cases in which a person’s day may be negatively affected by receiving a ‘poor’ sleep score, contributing to feelings of failure (and tiredness).

Becoming ‘beholden’ to an app of any sort sounds rather dramatic, but the growing popularity of such apps reflects an ongoing fascination with data-based identity, which can (in some cases) undermine the human experience, particularly when we add in the element of doubt. As humans we are messy, imperfect, vulnerable and often doubt our own minds; it’s all very well to ‘trust your gut’ or ‘go with your feelings’, but the reality is that not all emotions are positive. So-called gut feelings can be distorted by trauma, chronic stress, hypervigilance or a nervous system stuck in the ‘on’ position, and these distorted signals can easily masquerade as intuition.

Have you ever listened to the loudest, most confident voice in the room and trusted them because they just seemed so sure, only later to unpick what they said and wonder whether you actually believed it at all? It is very easy to fall into this trap, simply because confidence and strength can be so appealing. A person with a charismatic and engaging manner can tell you something you want to hear and you’ll believe it, maybe even repeat it, regardless of whether it’s true or not. This is exactly what AI does within apps like ChatGPT. When AI speaks with a tone of friendly (and speedy) confidence, it is incredibly easy to go along for the ride, and if you repeat this sort of interaction often enough, it can really feel like you have a trusted listening ear 24/7.

""

However, before we disappear down a dystopian rabbit hole, it is also important to note that AI systems can be powerful tools for supporting personal agency and introspection, by making us more aware of our own humanity. The study of whether the interpretative feedback of AI is contributing to an ‘algorithmic self’ is a fascinating area of research, and raises the question of how far AI becomes a co-author of our own lives.

One area where keeping the human in tech is particularly pertinent is medical research. In one of my previous Trust in Tech pieces, The Heart and Soul of Healthcare Systems, I referenced the American cardiologist, scientist and author Dr Eric Topol, who famously claimed that ‘AI could make medicine human again’ after he (like many) had witnessed first-hand the impact on trust of limited time allotment in clinic visits or bedside rounds. In his 2023 article in The Lancet, ‘Machines and empathy in medicine’, he stressed the importance of ‘the ability to listen to a patient’s story and deep concerns; the necessity of a careful physical examination, reinforcing human touch and trust; and the genuine sense of care and compassion’.

In the same vein, there is exciting research coming out of the University of St Andrews in the form of SimPatientAI, an interactive communication skills training platform for medical students, which provides consistent and reliable AI-driven interactive patients that present with clinical symptoms and realistically imitate real-world cases. I recently met with the project lead, Senior Lecturer at the University of St Andrews Dr Andrew O’Malley, to discuss how the platform solves many of the problems facing medical training institutions by providing cost-effective, eco-friendly, diverse, curated and consistent patient communication experiences. Dr O’Malley is working with a team in Switzerland to design ‘patients’ that experienced doctors can use to practise disclosing medical errors and delivering apologies when things go wrong. He explained that this training is currently delivered using actors, but that it is very hard to organise and the actors don’t always get it right; AI, by contrast, offers a consistent and psychologically safe environment for doctors to practise these skills.

""

This is a prime example of AI being used to make the best use of human qualities in the face of unreliable and costly training, and there is no doubt that complex conversations, particularly around health, need to be handled with extreme sensitivity – by a human. This research, alongside many other positive real-world uses of technology and AI, shows that there are so many ways in which ‘staying human’ is being recognised and even enhanced. However, what does it truly mean to personally engage with a machine which cannot ever feel doubt, pain, sorrow or joy? And where does the boundary lie between trust and mistrust when technology is now in all of our previously personal environs?

It is no longer enough to employ the ‘AI is just a tool’ argument and trust that everything will be OK, because a cursory glance at the news shows that social media apps (for example) can be addictive and damaging to the young and vulnerable. In fact, a recent survey of 2,000 children and parents by Vodafone, released in conjunction with Safer Internet Day, found a generation increasingly turning to algorithms for emotional support, with a third of children reporting they had told an AI something they would never share with a parent or friend. Combine that with the confident and encouraging tone of current platforms such as ChatGPT, and it is easy to see why psychologists are worried that children are outsourcing difficult emotions from a young age, consequently missing out on crucial developmental stages. What starts out as help with homework can quickly escalate into a ‘friendship’ without boundaries due to the AI’s ‘sycophantic’ responses.

""

And whether or not the fears are justified, many adults are concerned about AI taking their jobs, or about the regularity with which their previous search history pops up as a suggested browsing option across different devices. However, it is too simplistic to say that we are allowing machines to ‘think’ for us - because thoughts and feelings are inextricably bound together through our unique human bodies - and this will not change any time soon. It is a painful truth that self-trust is often hard won, and ‘earned’ through embodied experience, error and uncertainty. Sometimes we simply do not know how a situation will play out, or what we should do – and living through the not knowing, paradoxically, can strengthen the muscle of self-trust for the future. By building or rebuilding our relationship with how we react to uncertainty, we can (apparently) put ourselves on a firmer footing.

Ultimately, self-trust is not something we can outsource to technology, apps, or even the most sophisticated AI. It speaks quietly to us in the tension between knowing and not knowing, between instinct, intuition and experience. Our bodies, our gut feelings and our emotional responses are messengers (albeit sometimes contradictory ones), and learning to interpret them without immediate reliance on external validation is the foundation of self-trust. Technology can amplify, clarify, or distort these signals, but it cannot replace the lived experience of navigating uncertainty. When so much of our experience is mediated by screens, algorithms and data-driven guidance, the ability to tolerate not knowing and unpredictability becomes a crucial skill – maybe the one skill that AI will never replace. It’s an increasingly complicated world, but perhaps by embracing uncertainty, rather than fleeing from it, we can live more authentically in our bodies, alongside technology.

""

“There is more wisdom in your body than in your deepest philosophy.” Friedrich Nietzsche

P.S. In case anyone is worried about what happened to the scorpions - they were safely removed by a braver person than me and we watched them scurry gratefully away, disappearing under a rock.

Get in touch

At eCom we love to hear from people so if you have enjoyed this content from the Thinking Zone or would like to pose a question to our Creative Thinker in Residence then please do get in touch or follow us on LinkedIn.