The Hidden Inventions That Led to Generative AI

Nat Chrzanowska

Updated Nov 22, 2024 • 8 min read

Amidst all the praise for modern AI labs like OpenAI, Stability AI, and Hugging Face, it’s easy to forget that the road to generative AI (GenAI) was a long and winding one, with many critical milestones reached over the past century.

The current pace of progress in digital technologies is mind-boggling. We’re inundated with news about novel tech every day. It’s easy to forget that digital wasn’t always the El Dorado it is today. AI itself went through decades of “AI winter,” when few people worked on it and most didn’t believe it to be a worthwhile pursuit.

Then, in the biggest tech moment since the iPhone, we were all suddenly granted access to ChatGPT, and now the AI summer is in full swing.

To appreciate the weight of this moment, it helps to go back in time and explore the technological developments without which we wouldn’t have GenAI today.

Lowering the barrier to entry into AI development

One of the main things that has enabled recent progress in AI systems is the relatively low barrier to entry.

The most popular programming language in this area is Python. It’s the #1 language for researchers, not just in AI but in any domain that involves working with lots of data. It has a beginner-friendly syntax that resembles natural language and a rich ecosystem of ready-made libraries that handle complex computations and mathematical modeling without much code.
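
For a sense of what that looks like in practice, here’s a minimal, self-contained Python snippet – the dataset and numbers are invented purely for illustration:

```python
# A tiny example of Python's readable, almost natural-language style.
# The model names and scores below are made up for illustration.
from statistics import mean

model_scores = {
    "model_a": [0.71, 0.74, 0.69],
    "model_b": [0.82, 0.85, 0.80],
}

for name, scores in model_scores.items():
    print(f"{name}: average accuracy {mean(scores):.2f}")
```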

Python was first released in 1991, but it took several decades of innovation before that to make it possible. Back in the 1950s, programming as we know it didn’t exist. Computers were enormous and you didn’t really program them; you just gave them machine code instructions via an interface of switches, wires, and punch cards – that is, until compilers were invented.

Compilers translate programming languages into machine code. This invention is what enabled the creation of all current programming languages. It was one of the first steps in demystifying machines and making them usable for people who weren’t world-class computer scientists.

We can thank Grace Hopper for this, along with her pioneering work on debugging and her driving role in the creation and popularization of one of the first human-friendly programming languages, COBOL – a forerunner of readable, high-level languages like Python.

Grace Hopper was driven by the vision that computers should be able to communicate in natural language and support humans as copilots in their daily work.

Making programming languages more accessible set the stage for the technological advancements we’re seeing today. It opened the doors for many people to get into programming and was a huge step in the march toward sophisticated generative AI systems.

Helping computers understand natural language

With state-of-the-art computing being as limited as it was in the 1960s, people like Grace Hopper, who imagined computers understanding human language, were outliers. But this vision was exactly what was needed to spur scientists to develop machines’ ability to communicate.

This required a whole new approach, a separate avenue of scientific pursuit. The avenue that emerged was natural language processing (NLP), and its seed was a seminal work entitled “Synonymy and Semantic Classification.” It was followed by the development of inverse document frequency (IDF), a measure of how important a word is within a collection of documents – a concept that remains a key foundation of how modern search engines rank results.
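
To make the IDF idea concrete, here’s a minimal Python sketch of the classic formulation – rarer words score higher – using a toy corpus invented for illustration:

```python
import math

# Toy corpus, invented for illustration.
documents = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "generative models write text about the world",
]

def inverse_document_frequency(term: str, docs: list[str]) -> float:
    """Classic IDF: log(N / df), where df is how many documents contain the term."""
    df = sum(1 for doc in docs if term in doc.split())
    return math.log(len(docs) / df) if df else 0.0

print(inverse_document_frequency("the", documents))     # in every doc -> 0.0 (uninformative)
print(inverse_document_frequency("cat", documents))     # in 2 of 3 docs -> ~0.41
print(inverse_document_frequency("models", documents))  # in 1 of 3 docs -> ~1.10 (informative)
```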

Both inventions came from the brilliant mind of Karen Sparck Jones. Sparck Jones’s work in computational linguistics and information retrieval laid the foundation for a host of NLP applications. Researchers working on chatbots, sentiment analysis, and machine translation have all benefited from her inventions.

Large language models (LLMs), the poster child of GenAI, are the single biggest achievement in the field of NLP to date.

Teaching software to understand our preferences

We all have different needs and preferences, and current AI systems know them really well. But how did machines acquire the ability to understand our preferences?

A big part of it was Firefly, an intelligent software agent created in the 1990s under the guidance of Pattie Maes at the Software Agents Group at MIT. It was designed to learn from a user's actions and preferences to generate personalized recommendations – one of the first examples of software personalization.

"Collaborative filtering," as Maes called it, played an instrumental role in the evolution of modern AI systems. The technologies developed by Maes contributed to the creation of recommendation engines and chatbots.

These advancements also served as the building blocks for the development of social networks, marking the beginning of a new era of personalized information sharing – information that is now used to train AI systems.

A happy accident: the emergence of AI agents

In the labyrinth of AI development, an avenue less spoken about, but of equal significance, is the proliferation of AI agents – software tools powered by artificial intelligence that help programmers generate accurate, functional code. The powerhouses of this realm, such as GitHub Copilot and OpenAI's Codex, owe their existence to ideas first conceived over 80 years ago.

The journey of AI agents began with a happy accident. While developing GPT-3, OpenAI didn't set out to design a coding tool. Yet the team soon discovered the model's nascent programming skills. These capabilities were then fine-tuned, giving rise to a whole new category of AI assistants that are reshaping the field of programming today.

AI agents can be traced back to the pioneering insights of the previously mentioned Grace Hopper and her vision of computers with human-like capabilities. The idea that computers would communicate like humans was pure heresy back then, but Hopper saw it as the obvious course of digital evolution.

Enabling software to explore and learn by itself

Before AI became practical, there were many years of experimentation with different concepts. Most of them failed because computers weren’t powerful enough to implement them. This lack of computational power was one of the main causes of the AI winter.

One of the biggest breakthroughs in AI development was generative adversarial networks (GANs). Or at least that's the name they gained in 2014 – the underlying concept has been around since the early 1990s, when it was called artificial curiosity.

It was the brainchild of Juergen Schmidhuber and his team. In 1990, as a way to implement his concept of artificial curiosity, he created unsupervised adversarial neural networks that competed with each other in a simple game. The idea was promising but held back by the limits of computational power at the time.
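
To give a flavor of that adversarial game, here’s a toy, GAN-style Python sketch in which a tiny generator learns to imitate “real” data by trying to fool a simple discriminator. The setup, numbers, and loss are illustrative assumptions in the spirit of modern GANs, not Schmidhuber’s original formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
lr = 0.05  # learning rate, an arbitrary choice for this toy

# Generator: turns noise z into a sample a*z + b and tries to imitate "real" data ~ N(4, 0.5).
a, b = 1.0, 0.0
# Discriminator: logistic regression d(x) = sigmoid(w*x + c), tries to tell real from fake.
w, c = 0.1, 0.0

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for step in range(3000):
    real = rng.normal(4.0, 0.5, size=32)
    z = rng.normal(size=32)
    fake = a * z + b

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0 (manual gradients).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator update: push d(fake) toward 1, i.e. try to fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    grad_fake = (d_fake - 1.0) * w  # gradient of -log d(fake) with respect to the fake sample
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

fake_mean = np.mean(a * rng.normal(size=1000) + b)
print(f"fake samples now average {fake_mean:.2f} vs. real mean 4.0")
```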

Juergen has been fascinated with imbuing machines with autonomy and curiosity since the very beginning of his scientific career. His work includes multiple major breakthroughs, all foundational for current AI systems, which is why he's often called the "father of modern AI."

Among these breakthroughs is his work on recurrent neural networks (RNNs), which enabled computers to process sequential data like speech and natural language. In the mid-1990s, Juergen and his team published the architecture and training algorithms for long short-term memory (LSTM), an RNN variant still used across the whole digital spectrum, from robotics to video games.
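
For the curious, here’s a bare-bones NumPy sketch of a single LSTM cell step – the standard gated update with randomly initialized, made-up weights, not any production implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size = 8, 16

# Randomly initialized parameters, invented purely for illustration.
# W maps [previous hidden state, current input] to all four gates at once.
W = rng.normal(scale=0.1, size=(4 * hidden_size, hidden_size + input_size))
bias = np.zeros(4 * hidden_size)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev):
    """One step of the standard LSTM update: gates decide what to forget, store, and expose."""
    z = W @ np.concatenate([h_prev, x]) + bias
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # forget, input, output gates
    g = np.tanh(g)                                # candidate cell update
    c = f * c_prev + i * g                        # new cell state (long-term memory)
    h = o * np.tanh(c)                            # new hidden state (what gets exposed)
    return h, c

# Run the cell over a short, made-up sequence.
h, c = np.zeros(hidden_size), np.zeros(hidden_size)
for x in rng.normal(size=(5, input_size)):
    h, c = lstm_step(x, h, c)
print(h.shape)  # (16,)
```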

Back in 1991, he also created an early equivalent of today's Transformers with linearized self-attention – a precursor of the attention technology that drives ChatGPT. It took 30 years for the potential of Juergen's concept to be fully realized.
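
To hint at what "linearized self-attention" means, here’s a short NumPy sketch of kernelized linear attention in the spirit of modern linear Transformers; the feature map and shapes are illustrative assumptions, not the 1991 formulation:

```python
import numpy as np

def feature_map(x):
    # A positive feature map (elu(x) + 1), a common choice in linear-attention work.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Attention without the softmax: cost grows linearly with sequence length."""
    Qp, Kp = feature_map(Q), feature_map(K)
    kv_summary = Kp.T @ V             # (d_k, d_v): keys and values compressed once
    normalizer = Qp @ Kp.sum(axis=0)  # (n,): per-query normalization
    return (Qp @ kv_summary) / normalizer[:, None]

# Made-up queries, keys, and values for a sequence of 6 tokens.
rng = np.random.default_rng(0)
n, d = 6, 4
Q, K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d))
print(linear_attention(Q, K, V).shape)  # (6, 4)
```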

Most technological progress goes unnoticed

Most people didn’t even believe AI systems like ChatGPT were possible until OpenAI’s surprise release of the product at the end of 2022. But visionaries saw this coming not just decades but centuries ago, as Juergen Schmidhuber points out.

The success of OpenAI was made possible by many years of scientific research and grueling work by visionaries like Maes, Hopper, Sparck Jones, Schmidhuber, and legions of other gifted engineers and thinkers. As we enjoy the fruits of their labor and wonder about the consequences of the AI revolution, it’s important to remember what drove the development of this technology.

It was the idea that humans are more important than machines, so we should do all we can to ensure that AI helps us, augments our abilities, and serves a positive purpose.
