Demystifying Artificial Intelligence (AI)

Unwanted AI buzz

If you’re tired of all of the recent Artificial Intelligence (AI) buzz, you’re not alone. Most of us in the tech industry are as well. In this post, I’ll outline what AI is and how it works, why you’re hearing so much AI hype nowadays, and how AI functionality will eventually be built into the software products we use.

So, what is AI?

Firstly, AI is nothing new. The term itself was coined in 1955 by John McCarthy as a way to discuss how computers could make decisions based on the inputs and logic they’re provided. However, the idea of intelligent machines (or robots) has been popular in sci-fi writing for well over a century, and can be traced back at least as far as Samuel Butler’s 1863 article Darwin among the Machines.

Moreover, programs that generate human-like text have been common since university academics and grad students could get their hands on decent PCs to write them. The one I remember vividly from my university days was the Postmodernism Generator, created in 1996. Each page refresh generates an entire academic paper (references included). If your university professor was the kind that didn’t actually read final papers, you could hand one of these in and score a decent passing grade on presentation alone.

The AI you hear about in the news today is based on a field of computer science called Machine Learning (ML), which has been advancing since the 1950s. Modern ML takes large sets of data and finds correlations within them. For example, by correlating the similarities between millions of pictures of birds, a program can be created that accurately detects whether a given picture contains a bird. This program is called a model, and it takes time and computing power to create.
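
To make the bird example concrete, here’s a deliberately tiny Python sketch of classification by correlation, using made-up feature numbers (real models learn thousands of far richer features, but the spirit is the same): average the features of known bird pictures, then check how strongly a new picture’s features correlate with that average.

```python
import numpy as np

# Made-up "features" extracted from pictures we know contain birds.
# Real models learn thousands of features per image during training.
bird_features = np.array([
    [0.90, 0.10, 0.80],   # picture 1
    [0.80, 0.20, 0.90],   # picture 2
    [0.95, 0.05, 0.85],   # picture 3
])

# "Training" here is just averaging -- a stand-in for real model fitting.
bird_profile = bird_features.mean(axis=0)

def looks_like_a_bird(features, threshold=0.95):
    """Return True if a new picture's features correlate strongly
    with the averaged bird profile."""
    correlation = np.corrcoef(features, bird_profile)[0, 1]
    return correlation > threshold

print(looks_like_a_bird(np.array([0.85, 0.15, 0.90])))  # likely True
print(looks_like_a_bird(np.array([0.10, 0.90, 0.05])))  # likely False
```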

If ML models are instead designed to create new content (e.g., pictures of birds), they’re called Generative AI. If ML models are designed to comprehend and generate language or text, they’re called Large Language Models (LLMs). And several models can work together so that you can type a request into a text box and get back the generated or summarized content you asked for.
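
As a rough illustration of models working together, here’s a hypothetical Python sketch of what sits between the text box and the response. The function names are placeholders I’ve invented for this post, not a real API; in a real product each step would call an actual model.

```python
# Hypothetical pipeline -- the functions below are placeholders,
# each of which would normally be backed by its own ML model.

def retrieve_relevant_documents(request: str) -> list[str]:
    """A retrieval model (or search index) finds text related to the request."""
    return ["Article about backyard birds...", "Article about bird feeders..."]

def summarize(documents: list[str], request: str) -> str:
    """An LLM condenses the retrieved text into a readable answer."""
    return f"A short summary of {len(documents)} sources, tailored to: {request}"

def handle_text_box(request: str) -> str:
    """What happens between typing a request and seeing a response."""
    documents = retrieve_relevant_documents(request)
    return summarize(documents, request)

print(handle_text_box("What should I feed birds in winter?"))
```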

Under the hood, ML uses a massive amount of matrix math to correlate data when creating and executing models, which is why it runs very well on video cards, or Graphics Processing Units (GPUs). However, many manufacturers also create lower-cost, specialized processors called Neural Processing Units (NPUs) that are better suited to certain ML tasks.
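
For a sense of what that matrix math looks like, the toy Python sketch below (with made-up sizes, not any particular model) treats a single model layer as one matrix multiplication; real models chain thousands of much larger multiplications, which is exactly the kind of work GPUs and NPUs parallelize well.

```python
import numpy as np

# A toy model "layer": 4 input features mapped to 3 outputs by learned weights.
inputs = np.random.rand(1, 4)    # one example with 4 features
weights = np.random.rand(4, 3)   # values learned during model training
outputs = inputs @ weights       # a single matrix multiplication

print(outputs.shape)  # (1, 3)

# Real models stack thousands of far larger multiplications like this,
# which is why hardware built for parallel matrix math (GPUs, NPUs)
# runs them so much faster than a general-purpose CPU.
```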

Is AI really intelligent?

In short, no. As the famous computer scientist Alan Kay points out, ML by design is essentially reasoning by correlation, which is also the definition of superstition. ML can be used to generate content or to answer questions from existing data; however, it doesn’t actually understand the content and answers it provides, nor does it have the inspiration or creativity to always produce desirable results. Thus, any content generated by ML must be reviewed and modified by a human, and any answers provided by ML must be verified through alternate means (unless the data used to generate the model has been thoroughly verified and bounded to specific use cases).

Software developers know this all too well. While ML is useful for generating general code templates (called stub code; see the small example after the quote below) and simple logic statements, it doesn’t have the context or reasoning ability needed to actually solve problems. After all, ML just reasons by correlation. As a result, the following Linus Torvalds comment to a Linux kernel developer was taken out of context, applied to AI, and spread across the Internet in developer circles:

Linus Torvalds quote
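
Here’s the kind of stub code I mean, as a made-up illustration rather than the output of any specific tool: the scaffolding is there, but the parts that require actually understanding the problem are left for a human.

```python
# Illustrative only: the sort of scaffold an ML coding assistant might generate.
def process_orders(orders: list[dict]) -> list[dict]:
    """Validate and process a batch of customer orders."""
    processed = []
    for order in orders:
        # TODO: the business rules that make this correct -- pricing,
        # inventory checks, error handling -- still require a human
        # who understands the actual problem.
        processed.append(order)
    return processed
```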

Why are people and companies pushing AI right now?

The tech industry has always revolved around buzz. Anything new equates to excitement. And since tech excitement spreads very easily to those outside of the tech realm, such as business leaders and the general public, there’s opportunity for sales and investment. For startups, if your product doesn’t have the right buzz, you won’t raise capital. Several years ago it was VR and the Metaverse, a few years ago it was blockchain and crypto, and today it’s AI. While it may seem overt and silly, this practice (described by the Gartner hype cycle) has been the norm in the tech industry since at least the 1970s. Of course, that doesn’t prevent it from being the subject of many jokes and comics, such as those shown here:

AI comics

Today, social media, ambitious tech startups, and business leaders are the key catalysts of the Gartner hype cycle, and the related technologies are pitched in ways that appeal to people’s imagination and fear of missing out (FOMO). Popular media and news outlets will further pick up and ride the hype cycle, as will article and book authors. For example, you’ll find plenty of books on Amazon today detailing why business leaders should embrace AI, but most of them are akin to Ayn Rand novels that build up your ego and not much else.

That being said, not all large tech companies have bought into the AI buzz outright. Apple famously resisted using the term “Artificial Intelligence” in its communications, opting instead for the underlying technology term, ML, until this year, when industry pressure forced them to join in on the buzz. But instead of using the buzzword as-is, they modified it to “Apple Intelligence” in a brilliant act of both marketing and defiance. Similarly, they decided to implement it using a privacy-first, data-first approach that quickly received praise from nearly all tech professionals and pundits.

How will this AI stuff pan out?

As computing technology advances, so does the software and devices we use. We’re at a point in time where computers, smartphones, and Internet-connected gadgets can now run ML models, and most software running on these devices will certainly take advantage of AI for simplifying tasks. Most of this software will be from big tech companies that have the money and development resources to dedicate to ML, or from companies that pay larger tech companies for access to their ML models and frameworks.

This is why we’re currently seeing big tech companies like Microsoft, Google, Apple, OpenAI, and Nvidia fighting for their piece of the AI pie in the public spotlight – they stand to make the most money from ML in the future. They’re making deals, pushing hype, and testing both legal and copyright limits to see how governments will react in order to solidify their future roadmaps.

Everyone else, however, won’t have a direct say in how AI is developed, or make significant money from it. We’re sitting on the sidelines, helping the Gartner hype cycle along. For us, AI will end up being a new set of useful features in our existing software and devices for obtaining, generating, or summarizing information, features we’ll take for granted in 5 years. At that point, I’m confident there will be a new buzzword in everyone’s vocabulary and AI will be a distant memory, much like blockchain is today.

And in case it wasn’t obvious, this blog post was not generated by AI.