The Computer Version of Sherlock: Fighting Crime with AI
When I was a kid, I loved Sir Arthur Conan Doyle’s work. His Sherlock Holmes novels fascinated me: a man able to gather data points on everyone around him, then apply logical deduction to those same facts in order to solve crimes. I marvelled at the sheer intelligence he demonstrated in applying knowledge the way he did.
Truth be told, I’ve watched just about every adaptation of Sherlock there is, and a trend emerges: most of them, in some form or another, have Holmes use artificial intelligence and the computer technology around him to gather information on viable suspects. It turns out this is more than just movie magic; it is already in place in the world around us, and it has the potential to change the way we fight crime in the modern age.
Defining Artificial Intelligence
AI is one of those terms that has become almost a zeitgeist of its own. It’s a buzzword associated with money, data mining, advanced robots, and everything in between. Yet many don’t realize what it actually is.
Really, AI is just the application of algorithmic models, the same kind you learn about in math class. It aggregates data from a multitude of sources and then applies that data in a manner that is useful and valuable for the user.
Take YouTube’s recommendations, or your TikTok feed. They are not just random videos that the web happens to bring you; they are specifically tailored content, based on what the algorithm thinks you would like. The algorithm bases its assumptions on its data about your past behaviour: which thumbnails you clicked on the most, how long you watched, whether you liked a video, and so on. The goal of this AI is to keep you clicking on more videos and watching more ads, thus making the platform more money. If you want to see how successful these algorithms are, just look at your screen time.
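To make this concrete, here is a toy sketch of the scoring step such a feed might use. The feature names, weights, and scoring formula are invented for illustration; real recommendation systems are far more elaborate, but the principle of ranking candidates by predicted engagement is the same.

```python
# Toy sketch of how a feed-ranking algorithm might score candidate videos.
# The feature names and weights below are invented for illustration only.

def score_video(features, weights):
    """Weighted sum of engagement signals for one candidate video."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical learned weights: how much each signal matters.
weights = {"past_click_rate": 0.5, "avg_watch_fraction": 0.3, "liked_similar": 0.2}

candidates = {
    "video_a": {"past_click_rate": 0.9, "avg_watch_fraction": 0.8, "liked_similar": 1.0},
    "video_b": {"past_click_rate": 0.2, "avg_watch_fraction": 0.4, "liked_similar": 0.0},
}

# Rank candidates by predicted engagement, highest first.
ranked = sorted(candidates, key=lambda v: score_video(candidates[v], weights), reverse=True)
print(ranked)  # ['video_a', 'video_b']
```

The video with the stronger engagement signals lands at the top of your feed, which is exactly why your screen time keeps climbing.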
More popular forms of AI, like OpenAI’s ChatGPT, do the same thing. ChatGPT is a language model trained on human-made data: articles, books, and other written work were fed into an algorithm, and through machine learning the program learned to replicate that information, while continuing to learn from people’s responses.
There are many different ways AI learns. One of them is reinforcement learning: an agent learns how to behave in a prescribed environment in order to reach a goal. The AI is trained similarly to how you train a dog: it is given a reward every time it completes the task properly.
Over time, these systems become more and more efficient, fine-tuning their processes to fit the desired outcome better and better. This is because they build on what they already know, much as humans build on their own knowledge through trial and error.
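The dog-training analogy above can be sketched in code. The following is a minimal tabular Q-learning example, one classic reinforcement-learning algorithm: an agent on a five-cell corridor learns, purely from rewards, that walking right reaches the goal. The environment, reward values, and hyperparameters are invented for the demonstration.

```python
import random

# Minimal tabular Q-learning sketch: an agent on a 1-D corridor of 5 cells
# learns to walk right to reach a goal in the last cell. All values here
# are invented for illustration.

N_STATES = 5
ACTIONS = [0, 1]                       # 0 = step left, 1 = step right
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[state][a])
        next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0  # the "treat"
        # Q-update: nudge the value toward reward + discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

# After training, the best-valued action in every state is "go right".
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

The agent is never told the rules; it discovers them because the reward for reaching the goal propagates backwards through the value table, which is the "building on what it already knows" described above.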
Technology and Crime
Technology has long been part of criminology. It is a fixture in offender databases, online footprints, IP-address tracking, and more.
It has also empowered many criminal behaviours. Bitcoin, for example, is notorious for its use by criminal organizations as a decentralized, hard-to-trace way to conduct business, and many criminal syndicates now rely on cryptocurrencies for exactly that reason.
AI, however, has also been a fixture of criminal investigation; in fact, crime analysis was one of its first practical applications.
Predictive Policing and Applications of AI to Solve Crimes
One of the oldest examples of this phenomenon is predictive policing.
Predictive policing is when law enforcement utilizes data analysis and machine learning to forecast potential crime hotspots based on historical crime data, demographics, and environmental factors. This allows law enforcement to allocate resources more effectively, as they are able to predict where and when crimes are going to happen.
One example of this is PredPol. The system was developed in the early 2010s by a team of researchers at the University of California, Los Angeles (UCLA) and Santa Clara University, in conjunction with local law enforcement officials.
The software is built around a mathematical algorithm that draws on three pieces of past criminal data: the type of crime, the location, and the time of occurrence. From these, it predicts where future crimes might take place. PredPol stemmed from theories on the predictability of crime patterns: the idea that crimes tend to concentrate in certain areas and follow certain temporal patterns.
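The underlying idea of forecasting from type, place, and time can be illustrated with a drastically simplified sketch. This is not PredPol’s actual algorithm (which uses more sophisticated statistical models); it is just a frequency count over invented incident data, showing how historical clustering turns into a patrol recommendation.

```python
from collections import Counter

# Toy sketch of hotspot forecasting in the spirit of predictive policing:
# count past incidents per (grid cell, hour-of-day) bucket and flag the
# busiest buckets for extra patrols. All data here is invented.

past_incidents = [
    # (grid_cell, hour_of_day)
    ("cell_12", 22), ("cell_12", 23), ("cell_12", 22),
    ("cell_07", 14), ("cell_12", 21), ("cell_07", 13),
]

counts = Counter(past_incidents)

def predicted_hotspots(counts, top_n=2):
    """Return the top-N (cell, hour) buckets by historical incident count."""
    return [bucket for bucket, _ in counts.most_common(top_n)]

print(predicted_hotspots(counts))  # cell_12 around 10 p.m. ranks first
```

Even this crude version captures the core premise: if burglaries cluster in cell_12 late at night, that is where the next patrol should be.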
The software has had some hiccups, but it has also proved largely successful: a study conducted in Santa Cruz reported a 19% reduction in burglaries and a 6% reduction in robberies during the six months following local law enforcement’s adoption of PredPol. Through this AI, crime was actively being predicted and stopped. PredPol, as it stands, seems most useful for property crimes, but with enough work and varied data it could have applications beyond that. Incorporated into a neural-network-type model, it might even do the work of criminal profilers, developing a comprehensive psychodynamic profile of a criminal and perhaps predicting their next move.
New York City’s Domain Awareness System (DAS)
AI is not just a text-based medium; its data-harnessing capabilities apply to images and sounds as well. All of this is encapsulated in the Domain Awareness System (DAS), a surveillance system developed by the New York City Police Department (NYPD) in collaboration with Microsoft. DAS integrates an extensive network of cameras, license plate readers, and environmental sensors, which is then used for real-time identification of potential criminal activity and suspects.
Systems like DAS accomplish this through neural-network-type models, a subset of machine learning in which the program not only interprets data and makes predictions based on past knowledge, but can also generalize to new situations. Crime can be stopped before it is even a thought in a criminal’s mind.
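At the heart of any such neural network is a very simple computation. Here is a single artificial neuron, the basic building block: it combines input features through learned weights and squashes the result into a probability. The features and weights are invented; a real system like DAS would stack millions of these units.

```python
import math

# Minimal sketch of the computation inside a neural-network classifier:
# one neuron takes a weighted sum of its inputs and passes it through a
# nonlinearity to produce a probability. All numbers are invented.

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a nonlinearity."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

# e.g. hypothetical features extracted from a camera frame
features = [0.8, 0.1, 0.6]   # motion level, time-of-day score, crowd density
prob = neuron(features, weights=[1.5, -0.7, 0.9], bias=-0.5)
print(round(prob, 3))  # 0.763
```

Training adjusts the weights so that, over many examples, the output probability lines up with reality; stacking layers of these neurons is what lets the model pick up patterns no single rule could express.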
Ethical Concerns
When it comes to crime, there are some factors that an AI system cannot solve, and some it actively encourages.
For example, it is a statistical fact that people of colour are incarcerated at disproportionately higher rates than Caucasians. If you fed an AI model this information and asked it to make predictions, it would be more likely to flag people of colour as likely offenders, which would in turn drive incarceration rates higher and fuel a toxic cycle.
While this is not ideal, I would also argue that it is not the AI’s responsibility to take that into account; it is ours. AI exists as an objective metric, to be combined with humane judgment on the part of its users.
Additionally, the inverse is possible with the same system. An AI system can also eliminate the biases we hold as humans when we look at suspects; to a purely logical being, it doesn’t matter how well-dressed or silver-tongued you are: you are treated the same.
How Far Away Is a Digital Sherlock?
Truth be told, there is no easy answer: even though we have systems that come close, these models are still in their early phases and have a long way to go. One thing is for sure, as with all AI: the more data fed into them, and the more the systems are iterated on, the better they become.
There are also so many factors left to explore: think of all the psychodynamic data that could be added, or the introduction of a financial AI model that tracks white-collar crime. Humans generate data at an alarming rate, and it is free for the taking. Perhaps we are on the verge of something new: a world in which data is no longer used to keep us doom-scrolling on our phones each day, but rather to save our lives.