Our technology is wonderful and amazing. We walk around with the equivalent of 1980s supercomputers in our pockets, connected around the clock to a network of unimaginable proportions. We have access to information resources that were undreamed of a few decades ago. But we often use those resources simplistically at best, and at worst in a completely uncritical manner.
As a species, we humans find ourselves in a difficult position. We are increasingly coming to rely upon our technology, and we are doing so to our detriment. And while this isn’t some rambling, ranting manifesto claiming that the machines are taking over, it is a cautionary tale about what we know, what we think about and how we think.
For many of us, significant aspects of our workdays and personal evenings (and nights) are mediated by our devices. That has begun to have some interesting—and potentially challenging—consequences. Specifically, we’ve come to rely on technology to know things that we used to commit to memory. Where once we knew, now we Google. What we once took the time to understand and master, we have now outsourced to information stored on other people’s servers.
More specifically, we’re often defaulting to letting the technology do the thinking for us. And we seem to be okay with that. Even when we’re entirely capable of doing that for ourselves.
I encountered an early example of this a few years ago. I was in Victoria for the wedding of a good friend. We were staying at a downtown hotel, and were heading out for dinner. I had recently acquired a GPS unit (back in the days before iPhones meant you had one everywhere). So I programmed in the address, and started driving.
Now, I know Victoria reasonably well. I’ve been there more than a few times, and I know the major landmarks, the major routes and the ways in and out of the downtown. So I don’t specifically require GPS to know where I’m going, at least not until I get to a specific address in an area that I’m not familiar with. Despite this, I programmed the GPS at the hotel, and started following its directions.
The challenge with that is that—within two blocks of the hotel—it guided me through a wrong turn. Now, I knew it was a wrong turn. At least part of my brain did. But nonetheless, I uncritically did what the technology told me to do, and wound up on a five-minute detour to get back to where I should have been.
The obvious question is why I did that. My actual knowledge of where to go wasn’t dismissed—I didn’t question it at the time—so much as mindlessly ignored. The ability to exercise knowledge and navigate independently was surrendered to rule-following behaviour. Even in a situation where the instructions were completely wrong.
The inherent challenge in this is that we have the capacity to think for ourselves. We just don’t necessarily do so. One of our more interesting design features is that we’re cognitively lazy. Which is a roundabout way of saying that we have a habit of conserving mental energy when we can.
Daniel Kahneman, in his book Thinking, Fast and Slow, discusses the fact that we have two systems of thought, which he very originally describes as System 1 and System 2. System 1 is mostly thought of as our gut, intuitive thinking system; it makes snap decisions and judgements. System 2 is our critical thinking system, the one that actually analyzes and evaluates.
When we rely on technology, we’re defaulting to System 1. We’re taking the easy way. There is little mindfulness or critical capacity; we take what we’re told at face value. Which means that something needs to be very wrong or unusual for us to question what is going on.
When we’re needing to actually think about what we’re doing—when we need to remember how to navigate in a system that we’re not overly familiar with, for example—we need to work at it. That’s where System 2 comes in. We can navigate. We can use our knowledge of position and direction to sense where we are and reckon where we need to go next. We can course correct as we need to.
The challenge is that we need to work at it. It takes effort and conscious thought. And if we don’t have to do that, if there is an easier, faster way to get something done, our minds are quite happy to take that option. They will let someone—or something—else do the thinking for us.
This isn’t a bad thing, necessarily. It’s how we are wired. If we had to engage critically in every single decision, we would be exhausted. And we would make very few actual decisions. We’d be racked by uncertainty, obsessed by the need to analyze, and demanding more and more information and data to guide our decisions. Our brains default to System 1 because it’s easier. We make snap judgements on routine decisions, and essentially let our biases and intuition guide our choices.
That’s where the challenges arise. Because we all have biases. And just because we know that, and think we know what our biases are, that in no way stops us from relying upon them. One of my favourite stories in Kahneman’s book is when he talks about his own experience with decision bias. He is a psychologist who won the Nobel Prize in economics. With his colleague, Amos Tversky, he pioneered the study of behavioural decision making. And he declares himself routinely susceptible to exactly the same biases as the rest of us, despite knowing precisely how they work.
So if a Nobel Prize-winning theorist and researcher can’t circumvent the influence of decision-making biases, what hope is there for the rest of us?
What’s key about System 1 and System 2 is how we use them. System 2—the thoughtful, analytical one—tends to get triggered when we know we have a difficult decision to make. When thought and analysis are required, significant risks may be present, and consequences are high. It takes work and energy, so we theoretically make use of it when it matters. For everything else, we default to letting our subconscious function.
There are a couple—well, more than a couple—of challenges with how System 1 and System 2 operate in today’s world. For starters, System 1 now has Google. And that’s an interesting combination. Decision making is now a product of gut intuition and whatever the leading response to our search query is on a search engine.
I have experienced the consequences—good and bad—of that more than once. Getting a red wine stain out of a carpet? Google. Fixing a plumbing leak? Google. How to hang a door? Google. How to diagnose a problem with my car on the side of the highway? Also Google.
Like System 1 in general, that approach is useful much of the time. The wine stain is gone. The leak is stopped. The door (mostly) worked as intended. The car, though, was another matter. Googling my symptoms led to opinions that ranged from the catastrophic to the benign. Depending on who you believed, the only solution was either a tow (I was 80 km from a dealer) and the very expensive replacement of my catalytic converter, or to “just keep driving until something’s obviously wrong.” Google-augmented System 1 wasn’t doing well, and System 2 was just confused.
Which leads to a couple of other interesting consequences to how we engage in and attend to decisions in this day and age. We all have System 2. We can all move into a more analytical mode of decision making. The question is whether we are actually any good at doing that.
System 2 needs training. It needs to know how to analyze. It needs to understand what information is important, and what facts are inconsequential. System 2 presumes a range of critical thinking skills are at our disposal, finely honed and ready for action. And yet, when we don’t use those skills, they become dull, rusty and incapable of incisive use. We pretend we’re doing analysis, and yet we continue to orbit our own biases until Google-plus-System-1 seems tempting, or we’ve simply run out of time.
In other words, the more we struggle to use System 2, the more we’re likely to lurch back to System 1. Unless we feel confident that we’ve got the tools to function with complexity, we look for simple answers and step-by-step solutions. Big scary problems either get ignored or dealt with in the most simplistic way possible, simply because the consequence of doing otherwise is too scary to even contemplate.
This got driven home for me recently in a volunteer situation. For reasons I will not attempt to defend here, I’ve taken on the role of production director of my local community theatre. I also happen to be lighting designer for our Christmas show. Now, theatre is my first love. I studied it, I worked in it for a time, and I found purpose and friendship and meaning in the work that I was doing. Which is all more than awesome.
I last worked in theatre well before the turn of the millennium, however, and a great deal has changed in theatrical technology. When I last lit a show, computerized lighting boards were just finding their place. In the several decades since, they have evolved enormously. What’s more, even the lighting fixtures are now computers.
Diving back in, I at times felt overwhelmed: learning automated lights, DMX signalling, and lighting board programming and operation, all the while remembering and embracing what is involved in actually being a lighting designer, all in a space that was relatively new to me. Sorting out one thing often led to four more questions. The automated portions of what used to be militantly analog were the most overwhelming bits. And that was despite the fact that I’m really, really comfortable with technology in general. I can network, I can program, I can configure. And yet an automated lighting board stopped me in my tracks.
Until one day it didn’t. And the difference was a simple one. What I was trying to do in learning the board was read and absorb the operating manual. That might not sound like an unreasonable thing to do. Except that my overwhelmed and time-crunched brain wanted simple, step-by-step explanations of what to do. System 1 was raging to the fore, and it was demanding clear answers.
What changed was quite literally waking up one morning and going, “Wait a minute…” The concepts I knew in an analog world are still operating in the digital one. Lights get circuited. Circuits get patched into dimmers. Dimmers get assigned to submasters. And the automated part is that the board remembers the cues, so you don’t have to manually set them up anymore. (And if all of the above is Greek to you, that’s just fine).
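For readers who like to see concepts made concrete, the chain described above—circuits patched into dimmers, dimmers assigned to submasters, and cues remembered by the board—can be sketched as a few simple data structures. This is purely an illustrative model, not any real console’s software or API; every name and number in it is hypothetical.

```python
# Illustrative sketch only: models the analog concepts a modern board
# automates. Circuits are patched to dimmers, dimmers are grouped under
# submasters, and cues are remembered snapshots of dimmer levels.

class LightingBoard:
    def __init__(self):
        self.patch = {}        # circuit number -> dimmer number
        self.submasters = {}   # submaster name -> list of dimmer numbers
        self.levels = {}       # dimmer number -> intensity (0-100)
        self.cues = {}         # cue name -> saved snapshot of levels

    def patch_circuit(self, circuit, dimmer):
        self.patch[circuit] = dimmer

    def assign_submaster(self, name, dimmers):
        self.submasters[name] = list(dimmers)

    def set_submaster(self, name, level):
        # Moving a submaster drives every dimmer assigned to it.
        for dimmer in self.submasters[name]:
            self.levels[dimmer] = level

    def record_cue(self, name):
        # The "automated" part: the board remembers the current state,
        # so nobody has to reset every dimmer by hand for each cue.
        self.cues[name] = dict(self.levels)

    def play_cue(self, name):
        self.levels = dict(self.cues[name])

board = LightingBoard()
board.patch_circuit(12, 3)             # circuit 12 feeds dimmer 3
board.assign_submaster("warm_wash", [3])
board.set_submaster("warm_wash", 80)   # bring the wash to 80%
board.record_cue("act1_opening")       # the board remembers this look
board.set_submaster("warm_wash", 0)    # take everything out...
board.play_cue("act1_opening")         # ...and the cue restores it
print(board.levels[3])                 # 80
```

The point of the sketch is the one I woke up to: the building blocks are the same as in the analog world; the board simply remembers the configurations for you.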
The point is not learning the how-to steps. The essence is understanding the concepts and principles. Do that, and it opens up many different ways to accomplish what you want to do. It’s a roundabout way of getting back to context. It’s why the right answer always depends. If you know the building blocks, then you can learn all the configurations by which those blocks can be connected. And when you get really good, you can invent new configurations as well.
The consequence was that I went from days of desperate reading to a quiet afternoon of exploration that ended in a simple declaration of, “I’ve got this.” System 1 can quiet down, because System 2 has figured out a new way of reasoning. And once the planning is done, System 1 can happily take over and merrily get on with the involved yet, comparatively speaking, mentally undemanding work of getting the show configured.
We live in a marvellous time. The devices we carry in our pockets are amazing and powerful and wonderful, and give us access to a wealth of information that was previously unimagined. But at the same time, we are seriously at risk of taking that information for granted. And of using it very, very simplistically. The real potential consequence is that when it really, really matters, a critical decision may not—and likely will not—get the attention or analysis that it needs.
To be clear, I’m not arguing for full-on Luddism here. I embrace and love and adore my iPhone, and I’m not giving it up. Search engines are wonderful, and information—available, accurate, reliable information—at our fingertips is an incredible privilege and a tremendous resource. But thinking critically about how we use that power is important. And more essential still is building, refining and maintaining our ability to think critically when it matters.
Put simply, we need to learn how to learn (or re-learn). We need to keep learning. And most importantly, we need to recognize when switching out of autopilot and into working in a more analytical fashion is necessary. Like following the GPS, that starts with being aware of when we are relying on technology to tell us what we already know how to do. That should be our cue that we’re not paying attention, in a context where we possibly should.