We have a tendency to fall in love with our models, frameworks and methodologies. As I’ve written about before, obsessing over our processes and structures too much—or reinforcing them too formally—is never a good strategy. Nonetheless, we need some structure to work with, to guide us in making sense of the world. While thinking about one particular model this week, I came to a realization that highlighted a tension: the more we know and apply our models, the less likely they are to bring about new ways of thinking. That creates a bit of a problem to solve.
We’ve all been challenged by that one person in the meeting who opposes everything, simply for the sake of opposing. Or because they’re afraid. Or because they just like to argue. The role of the devil’s advocate is both challenged and challenging. But under the right circumstances, it can be hugely helpful.
When most of us think about artificial intelligence or machine learning, it’s in the context of computers taking over. There’s a very different view that puts us in the driver’s seat. Insights from the bleeding edge of cognitive computing.
We tend not to like the idea of constraints. Boundaries are, after all, rather limiting. And yet they are also essential to creativity and innovation. On the value of limitations in thinking big.
Disruption has become sexy, and the idea of disruptive innovation has come to dominate startup culture (and begun to spread beyond it). Nonetheless, execution and exploitation have their place as well. We need to organize in a way that accommodates both.
We are surrounded by increasingly strident exhortations to “think outside the box.” It’s an interesting expression, and one that has almost become a cliché.