
The Black Box Problem: AI's processes are extremely complicated. So, how do we understand how AI thinks?


One of the major problems with AI systems to date is a lack of explainability: their operations are shrouded within what is widely called a "black box." Anybody who really knows what they're talking about can explain how they know what they know and why they think what they do. How can we trust an AI, after all, if we cannot understand how it reaches its conclusions, and even the AI itself doesn't know how? I have arrived at a new paradigm that could represent a major leap in addressing this problem, one that holds the capacity to fill the black box with light. I will share that method in a more formal document. In this post, let's take a step back and reframe the discussion.


Think of current AI systems like impossibly complex mazes - data goes in one end, answers come out the other, but the path between them is a tangled web of mathematical transformations. Looking inside these systems is like staring at millions of interconnected calculations happening simultaneously - numbers flowing through layer after layer of computations at a scale no human could possibly follow. Even when these systems get the right answer, we can't trace how they got there. It's as if we built a machine that can instantly solve complex puzzles, but when we open it up to understand its method, all we see is an overwhelming storm of numbers and statistics swirling together in ways that somehow work but make no logical sense to us. No matter how accurate these systems become, there's something deeply unsettling about relying on decision-making processes that are fundamentally unexplainable. After all, how can we truly trust or verify a system when we can't understand its reasoning? That's the heart of the "black box" problem that plagues AI today.


Think about someone you know well—maybe your best friend or a close family member. Really understanding how their mind works isn't about dissecting every little decision they make or mapping out every habit. No, you get to know how they think by having countless conversations with them, hearing their stories, seeing how they react to different situations, understanding their hopes and fears. You pick up on their little quirks, the way their eyes light up at certain topics, how they approach problems. It's a gradual unfolding that happens naturally through genuine connection and curiosity. You couldn't possibly document every neural pathway in their brain or create a flowchart of their decision-making process. But through relationship, through seeing them navigate life's complexities, you develop an intuitive sense of their inner world, their way of being. That's how we truly come to know minds - not through cold analysis, but through warm connection.


It would be nearly impossible to gain that understanding by tracking down every single neuron, building up granularly, and observing every process in detail. Even getting started would be extremely difficult, given the sheer number of directions the neural pathways take, and tracking requires understanding those directions. Experts have already suggested approaches such as using progressively smaller models to explain the AI's processes in a way that humans can understand. How could that look in practice? The minute processes of a large AI could be interpreted by an ecosystem of smaller models, each tracking a different aspect of intentional and holistic thinking, that come together to form a bigger picture.
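To make that idea a bit more concrete, here is a minimal sketch of one well-known version of it: a surrogate model, where a small, readable model is trained to imitate a larger, opaque one. This is only an illustration of the general "smaller models explaining a bigger one" pattern, not the method hinted at above, and it assumes scikit-learn and a synthetic dataset standing in for real data.

```python
# A minimal sketch of surrogate-model explanation: a shallow decision tree
# is trained to imitate a larger opaque model's predictions, giving a
# human-scale summary of how that model behaves.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic data standing in for whatever the opaque model was trained on.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": a neural network whose internal weights are hard to read.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
black_box.fit(X_train, y_train)

# The explainer: a shallow tree trained on the black box's own predictions,
# so it learns to describe what the larger model does, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the small model agrees with the large one on new data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate agrees with the black box on {fidelity:.0%} of test inputs")

# A human-readable set of if/then rules approximating the black box.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(10)]))
```

An ecosystem in the sense described above would go further than this single explainer: many small models would each track a different facet of the larger model's behavior and then combine their views into a bigger picture.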


Here, by endowing the AI with cognitive processes and giving it a metaphorical "mind," we can peer into the AI's mind. Not only that, it helps shed light on our own thought processes. The creative process behind any work of art is often elusive, but building a relationship and having conversations with the artist can shed enormous light on how the work came to be. In order to peer into a mind, there must be a mind in the first place. It would be very difficult to trace the process by which Van Gogh painted The Starry Night merely by poring over every stroke of paint, or to reconstruct the detailed strategy by which an author crafts a novel. For those of you who've seen my post about copywriting, which is another way to say "experiencing the author's writing by rewriting every word," you understand how powerful that practice can be. It can give some experiential insight, but it still would not encapsulate the multifaceted richness and uniqueness of the original author's intent and design. Engaging with data not as an incomprehensible labyrinth of statistical transformations, but as connections within a mind, can profoundly transform the approach to AI explainability.



This post was written by me, with minor help from Claude 3.5 Sonnet

