AI Unveiled: The Enigmatic Journey of Artificial Intelligence

Ronald Berry
7 min read · Aug 24, 2023

“AI is not about replacing humans; it’s about enhancing human potential. It amplifies our abilities, augments our intelligence, and empowers us to solve complex problems.” (Fei-Fei Li, AI researcher)

Artificial Intelligence (Credit: girafchik123)

Introduction

Artificial Intelligence (AI) has garnered headlines as of late with the emergence of ChatGPT and other AI solutions. Each day we’re hearing about the launch of new AI solutions, new use-case opportunities, and new challenges in their application and usage. What began as a novel, curious discussion has become our reality. And we are now faced with not only learning about AI but also addressing its possibilities, risks, and applications in our daily lives.

This advancement of technology to support business ideas (that have been in the works for decades…no joke) led us to launch Artificially Digital (AD). Our objective is to develop an AI solution that addresses many of the common maladies that afflict today’s AI solutions (e.g., user engagement, data quality, and security). AD = Artificial Intelligence + Digital Transformation. More to come on this in future articles.

But before we get ahead of ourselves, let’s first discuss and understand Artificial Intelligence (AI). A lot has been written about it, and much of it is confusing and ominously scary. So, we’ve put together a series of articles to enrich your understanding of AI (BTW, you can blame my Calculus 41 professor for this approach).

This first article in our series details the common components and terminology of Artificial Intelligence.

Let’s get started.

Artificial Intelligence Timeline

Artificial Intelligence (AI) is the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI enables machines to learn from data, recognize patterns, make predictions, and even make decisions, mimicking certain aspects of human thinking and problem-solving.

And while AI is new for some folks, AI’s origins can be traced back to the mid-20th century when the field of computer science emerged.

AI Timeline
  • 1950: Alan Turing (portrayed in “The Imitation Game”) writes the paper “Computing Machinery and Intelligence,” which asks the question: “Can machines think?” In 1956, the Dartmouth Conference marked the birth of AI as a discipline, where researchers gathered to explore the possibility of creating intelligent machines.
  • 1950s-60s: During the 1950s and 1960s, researchers focused on creating programs that could solve complex mathematical and logical problems. However, progress was limited due to the computational power and data availability constraints of the time. (In Episode 1 of “Happy Days”, I believe Richie’s brother, Chuck, went off to do some early AI research).
  • 1980s: In the 1980s, there was a shift towards statistical and probabilistic approaches to AI. Machine learning techniques, such as neural networks, gained attention. However, the limited availability of large datasets and computational resources hindered their widespread adoption. For context, the first computer I was exposed to was a Timex Sinclair, which had 1KB of memory. 1KB!
  • 2000s: The 21st century brought significant advancements in AI due to the convergence of several factors: 1) the exponential growth in computing power; 2) the proliferation of digital data; and 3) breakthroughs in machine learning algorithms. During this time, deep learning, a subset of machine learning, achieved remarkable successes in various domains, including image recognition, natural language processing, and robotics.
  • Today: AI is advancing at an unprecedented rate, driven by innovations in computer technology. To put it in perspective, your smartphone boasts greater computational power than the guidance computer that steered Apollo 11 to the Moon. Such rapid evolution underscores AI’s vast potential to revolutionize industries, reshape economies, and tackle some of the world’s most intricate challenges.

Artificial Intelligence Common Components

Let’s dive deeper into the common components that make up AI:

  • Machine learning (ML): ML is a subfield of AI that focuses on creating algorithms that allow machines to learn from data automatically. ML enables machines to learn and improve their performance and accuracy over time without explicit programming (i.e., autonomously).
  • Neural networks: Neural networks are a specific type of machine learning algorithm inspired by the structure of the human brain. They consist of interconnected nodes, or artificial neurons, organized in layers: the first layer receives the input data, and the last layer outputs the results. Neural networks are excellent at recognizing complex patterns in data (a small sketch follows this list).
  • Deep Learning: Deep learning is a subset of machine learning that focuses on using deep neural networks with multiple layers. Deep learning is particularly effective in handling complex tasks, such as image and speech recognition. Think of it as adding additional filters in the processing of information to refine the output/insights.
  • Natural Language Processing (NLP): NLP is a field of AI that enables computers to understand, interpret, and generate human language in a way that is both meaningful and useful.
  • Computer Vision: Computer Vision is a branch of AI that teaches machines to interpret and act on visual information from the world, similar to how humans use their eyesight.
  • Cognitive Computing: Cognitive Computing involves systems that mimic human thought processes to solve complex problems. These systems are designed to learn and interact naturally, improving over time.
Artificial Intelligence Components
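
If you’re curious what “learning from data” looks like in practice, here’s a deliberately tiny sketch in Python (the example data, layer sizes, learning rate, and epoch count are our own illustrative choices, not taken from any particular framework or product). A small neural network with one hidden layer learns the XOR pattern from four examples, without anyone ever writing the XOR rule into the code:

```python
# A minimal sketch of a tiny neural network learning the XOR pattern.
# Uses only NumPy; the layer sizes, learning rate, and epoch count
# are illustrative choices, not a recommended configuration.
import numpy as np

rng = np.random.default_rng(0)

# Training data: four input pairs and the XOR result for each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 "artificial neurons" and one output neuron.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for epoch in range(10000):
    # Forward pass: input layer -> hidden layer -> output layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge the weights to reduce the prediction error.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)

    W2 -= learning_rate * hidden.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0)

# The network was never told the XOR rule; it learned it from examples.
# After training, the outputs should be close to 0, 1, 1, 0.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

The training loop nudges the network’s internal weights a little at a time until its outputs match the examples, which is exactly the behavior the machine learning and neural network bullets above describe.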

And foundationally, AI relies on the following components:

  • Data: The foundation of AI lies in data. Data can be anything from text and numbers to images or videos. AI systems analyze and learn from this data to make intelligent decisions or predictions. Large volumes of structured and unstructured data are processed and used to train AI models (for example, the massive text corpora used to train Large Language Models), providing the necessary information for learning and decision-making.
  • Algorithms: Algorithms are sets of instructions that guide machines on how to process and interpret data. These algorithms help machines recognize patterns, make sense of information, and generate insights (another small sketch follows this list).
  • Computing Power: AI relies on immense computing power to process large datasets and perform complex calculations at a rapid pace. Advances in hardware, such as GPUs (Graphics Processing Units) and cloud computing, have significantly contributed to the growth of AI capabilities.
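
To make the “data + algorithms” pairing concrete, here’s another tiny sketch (the fruit measurements below are invented purely for illustration). The data is a handful of labeled examples, and the algorithm is a few lines of instructions, a simple nearest-neighbors vote, that recognizes the pattern in them:

```python
# A minimal sketch of "data + algorithm": a nearest-neighbors classifier
# written from scratch. The fruit measurements are made up for
# illustration; real AI systems learn from far larger datasets.
import math

# Data: (weight in grams, diameter in cm) for labeled fruit examples.
training_data = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((160, 7.2), "apple"),
    ((110, 5.5), "lemon"),
    ((100, 5.2), "lemon"),
    ((120, 5.8), "lemon"),
]

def classify(sample, k=3):
    """Algorithm: label a new sample by the majority vote of its
    k closest neighbors in the training data."""
    neighbors = sorted(
        training_data,
        key=lambda item: math.dist(sample, item[0]),
    )
    votes = [label for _, label in neighbors[:k]]
    return max(set(votes), key=votes.count)

print(classify((155, 7.1)))  # closest examples are apples -> "apple"
print(classify((105, 5.4)))  # closest examples are lemons -> "lemon"
```

Swap in far more data and a more capable algorithm, and add the computing power to run it all at scale, and you have the basic recipe behind modern AI systems.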

Common Terms

And while we’re at it, let’s make you more “Artificially Intelligent” and define common AI terms.

  • Training: In AI, training refers to the process of teaching an AI model using a large amount of data. The model learns from the data and adjusts its internal parameters to improve its performance.
  • Prediction: AI models make predictions based on patterns they have learned from the training data. For example, an AI model trained on medical data can predict whether a patient is likely to develop a certain disease based on their symptoms. This is what is referred to as Predictive AI.
  • Accuracy: Accuracy measures how well an AI model performs. It represents the percentage of correct predictions made by the model. Higher accuracy indicates more reliable predictions.
  • Large language model (LLM): LLMs are a type of neural network that learns skills, including generating prose, conducting conversations, and writing computer code, by analyzing vast amounts of text from across the internet. Their basic function is to predict the next word in a sequence (see the toy example after this list). Typically, the larger the model, the more accurate its predictions tend to be.
  • Generative AI: A technology that creates content, including text, images, video, and computer code, by identifying patterns in large quantities of training data and then producing new, original material with similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images.
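
To see “predict the next word” in its smallest possible form, here’s one last toy sketch (the training sentence is made up, and a real LLM uses a large neural network rather than simple word counts). It “trains” by counting which word tends to follow which, and then “predicts” the most likely next word:

```python
# A minimal sketch of the core task behind large language models:
# predicting the next word in a sequence. Real LLMs use neural networks
# trained on vast text corpora; this toy version just counts which word
# most often follows each word in a tiny, made-up training text.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat "
    "the cat sat on the rug "
    "the dog chased the cat"
)

# "Training": count word-to-next-word transitions in the data.
words = training_text.split()
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Prediction: return the word that most often followed `word`."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (the most frequent follower of "the")
print(predict_next("sat"))  # "on"
```

Real LLMs do this at a vastly larger scale and with far more context, but the core task, predicting what comes next, is the same.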

Ok, that’s a lot to digest, so we’re going to stop here. In our next article, we’ll discuss common use cases and other considerations in the adoption and usage of AI.

Conclusion

Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence. AI enables machines to learn from data, recognize patterns, make predictions, and even make decisions, mimicking certain aspects of human thinking and problem-solving.

The intent of this first article was to get you started on the path of learning about AI. We wanted to provide you with a clear understanding of the common components and terminology.

Remember, learning about AI is a journey, and it’s perfectly okay to start with the basics and gradually delve deeper into the subject. Embrace your curiosity, continue exploring, and don’t hesitate to seek resources or assistance when needed. AI offers exciting opportunities, and with this knowledge, you are well-equipped to embark on some cool adventures.

About the Authors

Ronald (Ron) Berry is the Co-Founder of Artificially Digital. Ron has extensive global experience and success in the B2B and B2C digital transformation spaces in a variety of industries ranging in size from startups to the Fortune 100. Ron holds an MBA from Wharton and a BSIE from Stanford University.

Dr. Shams Syed is the Co-Founder of Artificially Digital. Dr. Syed has extensive experience in software development, particularly in the artificial intelligence (AI) space for several innovative startups. Dr. Syed holds a PhD in computer science from the University of South Carolina. He is renowned for his research, contributions, and publications in essential programming techniques, machine learning, computer vision, algorithm optimization, and natural language processing.

Contact info@artificiallydigital.com for more information.
