Patterns, or Getting to Objective Truth – A thought piece on Artificial Intelligence

Summary

The concept of pattern recognition is deeply ingrained in nature and human biology, allowing us to identify correlations between environmental cues and take mental shortcuts. However, these patterns can also lead to maladaptations when our environment changes, as seen in humans’ tendency towards sweet and fatty foods despite modern nutritional risks. This raises questions about the limitations of human thinking and the potential for AI systems to overcome them by recognizing patterns that don’t make sense to us on a conscious level. But if we cannot fully understand how patterns arise in human thinking, can we truly verify whether an AI’s learned pattern is correct? This thought piece explores these concerns through a philosophical lens, questioning the nature of truth and how it can be defined in both scientific and AI contexts, and touching on the broader implications for financial markets.

Preface

This is one text in a collection of essays and thought pieces that lie at the intersection of AI and other topics.

I’d highly appreciate receiving ideas, feedback and opinions, as well as engaging in meaningful discussions about the topics I cover in this collection of essays, which I plan to publish over the course of the next weeks and months.

I am aware that these texts might contain some typos, punctuation mistakes or grammatical errors, as I am just an individual writing this in my free time off work.

I hope to engage with a community of people who share my passion for AI, and I’d highly appreciate getting other perspectives on these topics, as these texts are based on my understanding of AI and the world, plus some additional thoughts and notions. I might not explain things to their full extent, or I might explain them in a way that makes my chain of thought a little hard to follow if you come from a different domain, so if there are sections within the texts that are not explained sufficiently, feel free to reach out. For the sake of not burdening you with too much to read, there will also be some over-simplifications in these texts.

Please don’t see this as something set in stone. I am open to hearing different perspectives and broadening my view on this matter, as I am sure that there are points of view out there so far outside my current field of view that I am unable to consider them as of now.

Main

Our natural world is filled with patterns.

Humans and animals are able to identify these patterns (consciously and unconsciously) and act accordingly. Pattern recognition is, for the most part, really helpful (e.g. being able to tell that food has gone bad because specific molecules bind to our scent receptors, or looking at the sky, seeing dark looming clouds, hearing rumbling in the distance and predicting that a thunderstorm is coming). Recognizing patterns creates the ability to take mental shortcuts and learn over time (eating food with a weird, distinct smell = sickness → next time: avoid food with that particular smell; seeing specific clouds in the sky and hearing a particular sound = getting drenched → next time: seek shelter beforehand to avoid getting wet). Some of these patterns are so deeply ingrained in our biology that we don’t even need to experience a specific situation in the first place (e.g. the disgust reaction when encountering spoiled food).

However, in some cases these shortcuts can have unintended consequences, especially when the environment we live in changes. Taking the natural world as an example, birds have been found to incubate round objects (such as golf balls), which is likely due to an evolutionary pattern recognition process: objects with specific qualities (roundness, a certain size, color) have a high likelihood of being an egg, and the instinctual reaction is to incubate it (just a hypothesis). From an evolutionary standpoint, we humans are optimized to prefer sweet and fatty foods, as these are associated with a higher calorie count and therefore provide more energy. This pattern would have given our ancestors a competitive advantage, as more energy meant a higher likelihood of surviving and passing on genes. In modern times, however, this pattern is maladapted to an environment that includes foods high in refined sugars and fats (leading to health issues) and low-calorie artificial sweeteners (the body expects a spike in blood sugar and, in anticipation, releases insulin to store sugar that never comes; this has further consequences, which are outside the scope of this text).

This is how humans (and other animals, for that matter) are flawed. It’s hard to override the patterns that evolution has ingrained in us, and I’d also dare to say that there might be patterns we follow without even being aware of them, so overriding them wouldn’t even cross our minds in the first place.

To summarize: patterns emerge from data collected from our environment (data that we are somehow able to perceive), and patterns that are beneficial in one setting might not be appropriate in a different one.

Patterns can be prone to generating biases.

Human thinking in itself also has a black-box nature. We still don’t have a comprehensive understanding of how we come to conclusions and which underlying patterns shape human thinking (on a collective and individual level). Would it be beneficial to first fully understand how patterns arise in human thinking and then apply that to AI, trying to replicate the way patterns arise? Or could we create AI in a way that makes its patterns even more efficient than the ones we find in human thinking?

AI could find patterns that don’t make sense to us humans, as there might be correlations that we cannot perceive, or are not capable of perceiving, on a conscious level (e.g. tiny nuances of color in an image, or micro-expressions we’d only notice when playing a video back). We’ll just have to live with the fact that there might be correlations between things that don’t make sense to us at the level of human perception.

This is where a problem arises: if we cannot make sense of the underlying patterns in the first place, how can we ensure that the output is representative of reality?

AI is great at recognizing patterns. In some areas it is even better at recognizing patterns than humans are, as it can take in additional information and may correlate things that are not on our radar. We might be biased in our own thinking due to specific patterns we evolved to see or prioritize. AI can create patterns taking all sorts of information into consideration; we just try to fine-tune it so the output meets our standards. Take a dataset of pictures of dogs and cats as an example. An AI is able to recognize that there are distinct differences between dogs and cats and will search for underlying patterns that make a dog a dog and a cat a cat. The underlying pattern by which it comes to its conclusion is not clear, and there are multiple ways the AI can be fooled into believing a dog is a cat and vice versa. When presented with new data (e.g. a picture of a penguin), the dog-cat classifier will have issues deciding what that penguin is. It might, however, be able to tell that the penguin has qualities that fit neither the dog nor the cat category. The bigger issue we face is that we ourselves don’t understand the patterns by which humans separate dogs from cats, nor how to transfer that pattern to AI.
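
To make the penguin case a little more concrete, here is a minimal sketch in Python of how such a classifier behaves at the output level. The logits are invented numbers standing in for a hypothetical dog-vs-cat model, and the confidence cutoff is an arbitrary assumption of mine, not something a real model comes with:

    import numpy as np

    def softmax(logits):
        e = np.exp(logits - logits.max())
        return e / e.sum()

    CLASSES = ["dog", "cat"]
    CONFIDENCE_CUTOFF = 0.8  # arbitrary threshold for "the model is unsure"

    # Invented logits standing in for a hypothetical dog-vs-cat model.
    examples = {
        "dog photo":     np.array([4.2, -1.3]),  # clearly "dog"
        "cat photo":     np.array([-2.0, 3.5]),  # clearly "cat"
        "penguin photo": np.array([0.3, 0.1]),   # near-uniform: fits neither class
    }

    for name, logits in examples.items():
        probs = softmax(logits)
        top = int(probs.argmax())
        if probs[top] < CONFIDENCE_CUTOFF:
            print(f"{name}: unsure (p={probs[top]:.2f}) - possibly outside both categories")
        else:
            print(f"{name}: {CLASSES[top]} (p={probs[top]:.2f})")

The point of the sketch: the model always produces an answer over the categories it knows, so the best we can do for the penguin is notice the low confidence – the classifier itself never tells us what the penguin actually is.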

How do we ensure AI learns the right underlying patterns of the world to generate output that is as accurate as possible? Are we even able to verify whether a learned pattern is the correct one? With most systems we just see the end result or output. With explainable AI we might be able to tell that X and Y are correlated, but we still remain in the dark about why this is the case in the first place – maybe X actually causes Y, or maybe the data is to blame, and if we had a more comprehensive dataset, we’d be able to come to a more complete conclusion.
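
A tiny simulation illustrates why a detected correlation says so little on its own. Here a hidden variable Z drives both X and Y; the numbers are synthetic and the variable names are mine, but the effect is the classic confounder:

    import numpy as np

    rng = np.random.default_rng(0)

    # Z is a hidden common cause; X never influences Y and vice versa.
    z = rng.normal(size=10_000)
    x = z + rng.normal(scale=0.5, size=10_000)
    y = z + rng.normal(scale=0.5, size=10_000)

    print("corr(X, Y) =", round(np.corrcoef(x, y)[0, 1], 2))
    # Prints a correlation of roughly 0.8 even though there is no causal
    # link between X and Y - only the unobserved Z connects them.

An explainability tool that only sees X and Y would correctly report the correlation and still leave us in the dark about Z.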

Most of the data we have is from the present point in time; we cannot clean the data for factors we cannot account for, or are not aware can influence the data we feed to AI. There might be larger-scale patterns that we cannot grasp or take into consideration because the data needed to identify them spans multiple lifetimes, decades or millennia. That is, if there are patterns at all – which we are led to believe there are, because we have plenty of evidence around us that patterns exist.

When trying to make sense of the world, we must take into consideration that the patterns we have learned might or might not represent objective truth. The AI systems we train are optimized to fit our understanding of the world, or at least our reasoning about how the world works. AI can identify patterns that we might be unable to detect, for example in early cancer detection in medicine. We can verify whether the AI was correct by taking a tissue sample, running a histological analysis and concluding that it was cancer. We define what ground truth is supposed to look like.

However, I’d like to differentiate between ground truth (which is what we optimize our AI systems to detect) and objective truth (which might be different from ground truth, especially if there is ambiguity in definition – think: when does cancer start being cancer? Is it when the histology report comes back positive, when a single cancerous cell is present, or when the first mutation happens that will eventually develop into a cancerous lesion?). Ground truth is what we define to be true via arbitrary classifications and definitions. Objective truth is something we might never be able to reach. It remains questionable whether we need to reach it in the first place, or whether the proxy of ground truth is close enough (my take: ground truth will be sufficient for most easy tasks; the more complex the issue, the closer we should try to match ground truth to objective truth).
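
Here is a toy example of how strongly ground truth depends on the chosen definition. The samples and the cutoffs are entirely made up; the point is only that moving an arbitrary threshold relabels the same underlying reality:

    # Made-up counts of mutated cells found in five tissue samples.
    samples = [0, 1, 3, 12, 40]

    def label(count, threshold):
        return "cancer" if count >= threshold else "healthy"

    for threshold in (1, 10):
        print(f"threshold={threshold}:", [label(c, threshold) for c in samples])
    # The same samples flip between "healthy" and "cancer" as the
    # definition changes - ground truth moved, objective truth did not.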

To summarize this section in a philosophical way: what is reality (truth), and how do we define it?

What is truth?

How do we get to truth? In a scientific context the answer is quite straightforward: we come up with a hypothesis and test it by collecting evidence and counter-evidence. When we have sufficient evidence, we can support or reject the initial hypothesis, and this is how we define truth.
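
For readers who want the textbook version of this loop in code, here is a minimal sketch using a two-sample t-test. The data is simulated rather than collected, and the 0.3 effect size is an arbitrary assumption:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control = rng.normal(loc=0.0, scale=1.0, size=200)
    treatment = rng.normal(loc=0.3, scale=1.0, size=200)  # assumed small effect

    t, p = stats.ttest_ind(treatment, control)
    print(f"t = {t:.2f}, p = {p:.4f}")
    # A small p-value counts as evidence against the null hypothesis
    # ("no difference"), not as proof of an objective truth.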

I’m not certain whether objective truth is actually achievable, or whether it even exists, as we as humans are the ones who come up with the concepts to label what we see and experience in our surroundings in the first place.

Another way to get to truth, or at least to find explanations, is to make an observation and trace back how it came to be until we reach a satisfying answer. The issue with this is that we are also the ones who define when the explanation is sufficient. Otherwise, when trying to explain anything, we’d be in an almost endless stream of cause and effect until we end up at concepts we cannot grasp or that don’t satisfactorily answer the initial question (e.g. ending up at the big bang, or at the tiniest of particles moving somewhere).

In an AI context, evidence and counter-evidence are the data we present to the AI. The AI’s job is to analyze the data, form patterns and then output something that represents truth (or, in a broader sense, whatever we want it to output).

This is not to say that AI cannot aid in the process of getting to truth by uncovering patterns or correlations that humans cannot perceive or logically come up with by themselves (e.g. a certain demographic – say men aged 25-35, living in the US, interested in cars – who watched video X also being very likely to watch and engage with video Y). This new piece of evidence can subsequently be used to make a case for proving or disproving a hypothesis.
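
The kind of pattern in that example boils down to a conditional probability over watch histories. A minimal sketch, with invented histories and the video names from the example:

    # Invented watch histories; "X" and "Y" are the videos from the example.
    histories = [
        {"X", "Y"}, {"X", "Y"}, {"X"}, {"Y"}, {"X", "Y"}, set(), {"X", "Y"},
    ]

    watched_x = [h for h in histories if "X" in h]
    p_y_given_x = sum("Y" in h for h in watched_x) / len(watched_x)
    print(f"P(watched Y | watched X) = {p_y_given_x:.2f}")  # 0.80 here
    # A recommender can exploit this number directly - without ever
    # knowing, or needing to know, why the two videos go together.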

Truth, it seems, is not immutable. What we define as truth can change over time if evidence builds up that makes the initial hypothesis unlikely. Truth is therefore in a constant state of fluctuation, able to change suddenly or gradually as we gather more evidence. This is what ultimately makes financial markets so hard to predict. There is a) a multitude of factors that need to be accounted for (from company-specific metrics, to politics, to world events, etc.) and b) a constant change in what we or the market considers to be true (i.e. high levels of ambiguity), which cannot be traced back to historic data because the situation the markets face is usually unprecedented. We have too little data to predict the market with high enough accuracy; however, the more quality data we collect, the more accurate we can make the models. The issue lies in the term “quality data”, as in this context it would most likely include data that is either unstructured or not available at all (e.g. true public or investor sentiment; social media could be used as a proxy, but would most likely not be reliable enough, as social media sentiment is biased towards extremes). For these reasons I doubt that we’ll see reliable market prediction models in the near future.

Nonetheless, data is an extremely valuable asset in this day and age. The more quality data we have, the more accurate we can make our models – and there is money to be made here. The question remains: how is quality data going to be collected in the first place?