Decoding AI's Technical Jargon

Try this experiment next time you’re at a dinner party: Drop the term “AI” into the conversation and see what happens. Chances are you’ll end up on one of the following topics: Skynet, HAL, “Blade Runner,” dystopian science fiction in general, Elon Musk, Bill Gates, billionaires in general, “the new joblessness,” robots, robot dogs, robots in general, or Brett Kavanaugh (OK, you would have ended up on that last one anyway).

What do we talk about when we talk about AI? The iconic textbook “Artificial Intelligence: A Modern Approach,” by Stuart Russell and Peter Norvig, lays out no fewer than eight (!) definitions of AI. This bombshell appears not toward the end of the 1,062-page doorstop, but right on page two. At which point anyone not majoring in the subject can be forgiven for slamming the book closed and moving on to more straightforward pursuits, such as classical piano or BASE jumping.

Part of the problem is language. As Ben Evans noted, the term “artificial intelligence” tends to end any conversation as soon as it begins. It is a term that both intimidates and confuses, a hazy signifier onto which all sorts of other things are glued. But many of those other things, it turns out, have clear edges worth exploring.

What follows is a ground-up description of the common tools and technologies people use every day to build AI applications. My focus is not on the high-level uses of AI in marketing; that is a topic well covered elsewhere, and, besides, most of us already know the basics: for example, that computer vision is used to recognize logos in images, and that Alexa uses natural language processing to parse human speech.

Instead, my goal is to provide a plain-English explanation of what we encounter once we venture out of the marketing department into adjacent corridors. Often the purview of engineering and data science teams, these foundational building blocks are within the conceptual reach of any curious mind, and the savvy marketer is wise to make their acquaintance.   

Note: I have deliberately simplified and compressed the topics below, since each is the subject of numerous books as well as courses available on Udacity and Coursera. Also, for the sake of simplicity, I use the terms “AI,” “machine learning,” and “deep learning” interchangeably; this is not strictly correct, but correct enough for our purposes.

Python: a genus of non-venomous snake found in Asia and Africa. See, isn’t this easy?

Just kidding. Named after Monty Python and not a reptile, Python is a high-level programming language used widely in data science and machine learning, beloved by developers for its elegant syntax and extensibility. Python’s core philosophy is often summarized in aphorisms like “beautiful is better than ugly” and “simple is better than complex.”
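Those aphorisms aren’t marketing copy; they ship with the language itself as “The Zen of Python,” and the “elegant syntax” looks something like this. (The example is mine, a minimal sketch, not from the article.)

```python
# "The Zen of Python" (PEP 20) is built into the language:
# running `import this` prints all of its aphorisms,
# including "Beautiful is better than ugly."
import this

# A taste of the elegant syntax: square the even numbers 0-9
# in one readable line, using a list comprehension.
squares = [n * n for n in range(10) if n % 2 == 0]
print(squares)  # [0, 4, 16, 36, 64]
```

Try pasting those four lines into any Python interpreter; no installation beyond Python itself is required.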

NumPy: does not rhyme with “lumpy.” Short for “numerical Python,” NumPy is a library designed for very fast scientific computation. It is accessible within Python but performs mathematical calculations hundreds of times faster than standard Python. Which is great, because AI applications like to do all kinds of gnarly calculations using multidimensional mathematical objects (see linear algebra).
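Here is what one of those “multidimensional mathematical objects” looks like in practice, a small sketch of my own: a single vectorized NumPy call replaces the explicit Python loop you would otherwise write, which is where the speed comes from.

```python
import numpy as np

# A multidimensional mathematical object: a 2 x 3 array (a matrix).
a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# One vectorized operation applies to every element at once --
# no Python-level loop required.
doubled = a * 2

# Reductions work along whole axes: sum down each column.
column_sums = a.sum(axis=0)
print(doubled)      # [[ 2.  4.  6.]  [ 8. 10. 12.]]
print(column_sums)  # [5. 7. 9.]
```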

Pandas: rhymes with, um… pandas. Derived from the econometrics term “panel data,” “Pandas” denotes a package for data manipulation and analysis in Python. One major factor driving the recent success of machine learning algorithms is the vast amount of data available for training those algorithms. Built for fast, flexible data analysis, Pandas is as perfectly designed for crunching data as those other pandas are for chomping bamboo.
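To make that concrete, here is a tiny sketch (the campaign data is invented for illustration): Pandas holds tabular data in a DataFrame, and common analysis steps each become one readable method call.

```python
import pandas as pd

# A tiny table of hypothetical ad-campaign results.
df = pd.DataFrame({
    "channel": ["email", "social", "search", "social"],
    "clicks":  [120, 340, 560, 410],
})

# Group by channel, total the clicks, and rank the channels --
# each step is a single chained method call.
by_channel = (df.groupby("channel")["clicks"]
                .sum()
                .sort_values(ascending=False))
print(by_channel)  # social 750, search 560, email 120
```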

PyTorch (& TensorFlow / Keras): Often confused with an obscure French baking technique, PyTorch is actually a framework for building and training neural networks. Deep learning is based on artificial neural networks, which — stay with me — are built from simplified models of biological neurons called perceptrons. PyTorch is an open-source project maintained by Facebook’s AI Research team, while TensorFlow is Google’s equivalent; Keras is a higher-level API that runs on top of TensorFlow. There are important differences between the frameworks, but generally speaking, PyTorch is considered a more natural fit with Python itself and with the concepts of deep learning. Both rely on linear algebra.
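To show how small the core idea is, here is a minimal PyTorch sketch of my own: a single-layer network, essentially a perceptron, mapping three input features to one output. (The layer sizes are arbitrary choices for illustration.)

```python
import torch
import torch.nn as nn

# A single-layer network -- essentially a perceptron.
model = nn.Sequential(
    nn.Linear(3, 1),  # under the hood: a matrix multiply plus a bias
    nn.Sigmoid(),     # squashes the output into the range (0, 1)
)

x = torch.randn(5, 3)  # a batch of 5 examples, 3 features each
y = model(x)
print(y.shape)         # torch.Size([5, 1])
```

Training would add a loss function and an optimizer, but the forward pass above is the whole skeleton: data in, linear algebra, prediction out.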

Linear Algebra: Think of linear algebra as a kind of geometric math, and its fundamental building block as the vector. On a simple x, y graph, where x is the horizontal axis and y is the vertical axis, the place the axes intersect is the origin. If you draw a straight line from the origin to any point on that graph, say (2, 3), you’ve drawn a vector. Visually it will look like an arrow going up and to the right, and mathematically it will look like a single row or column of numbers. If you add another column, you’ll have a matrix, and if you multiply a whole bunch of matrices together, you’ll have not only the basics of deep learning computation, but also the irrepressible urge to buy that fast new MacBook Pro you’ve been eyeing.
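The whole chain, vector, matrix, multiplication, fits in a few lines of NumPy. This sketch uses the (2, 3) vector from the example above:

```python
import numpy as np

# The vector from the example: origin to the point (2, 3).
v = np.array([2, 3])

# A 2 x 2 matrix is just two such columns side by side.
m = np.array([[1, 0],
              [0, 2]])

# Matrix-vector multiplication -- the core operation a neural
# network performs over and over during training and inference.
result = m @ v
print(result)  # [2 6]
```

This particular matrix leaves the x-coordinate alone and doubles the y-coordinate, which is exactly the kind of geometric transformation “geometric math” refers to.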

Have fun.

