Fabio Matricardi

Summary

The article "How to start in AI — 2024 is the perfect time!" provides an accessible guide for beginners to enter the field of Artificial Intelligence, emphasizing that it's never too late to start learning, even without a science background.

Abstract

"How to start in AI — 2024 is the perfect time!" is a comprehensive guide aimed at individuals interested in leveraging the potential of AI but unsure where to begin. The author, Fabio, shares his personal journey of learning AI at 47 years old with a background in Philosophy and Theology, demonstrating that age and academic background are not barriers to entry. The article outlines the foundational concepts of AI, focusing on Generative AI, and provides practical steps to start building a personal AI assistant using free resources such as Python, Google Colab Notebooks, and open-source models from the Hugging Face Hub. By following the provided instructions and code snippets, readers can learn to ask questions, summarize texts, and extract main topics from articles using AI. The article encourages readers to engage with AI technology, offering additional resources and a call to action to follow the publication "The AI explorer" for ongoing learning.

Opinions

  • The author believes that the perceived complexity of AI is surmountable, and anyone can learn to use AI effectively.
  • There is an emphasis on the idea that AI is not just for experts or those with a technical background.
  • The article suggests that Generative AI, particularly Large Language Models (LLMs), is a more approachable starting point for beginners compared to the broader field of Machine Learning.
  • The author conveys enthusiasm for the democratization of AI, highlighting the availability of free tools and resources that lower the barrier to entry.
  • The opinion is expressed that hands-on experience with AI tools is crucial for understanding and appreciating the technology's capabilities.
  • The author encourages continuous learning and experimentation with AI, suggesting that it is a dynamic field with much potential for personal and professional growth.

How to start in AI — 2024 is the perfect time!

You are in the right place if you have heard of the amazing possibilities of Artificial Intelligence but you don’t know how or where to start…

image from lexica.art

So, you are interested in Artificial Intelligence and you want to make good use of it. That is super good. But you don’t know where to start, or maybe you even think that you are too old for it…

Let me tell you frankly: it is not too late, and it is easy to start.

I am Fabio, I am 47 years old, and I started learning Artificial Intelligence less than 2 years ago. I don’t have a scientific academic background either: I have a degree in Philosophy and Theology 🧐. After 8 months I was already writing articles here on Medium.

If I managed to do it… well you can certainly do it too!

By the end of this article you will be able to do the following yourself, with your own AI:

  • Ask questions and get answers
  • Summarize a long text
  • Get the main topics of an article

You are in the right place

The main premise of this publication can be summarized in the words of its curator, Nitin Sharma:

But while we see how rapidly AI is progressing, most of the individuals are clueless about how to use AI…

So, first things first: you are in the right place. Here in The AI explorer you can find articles about AI that actually help anyone who wants to learn to use AI.

In this article I am going to share with you the foundations (terminology and technologies) of Artificial Intelligence, with a focus on Generative AI. Don’t worry, it is not only theory: we will jump straight into Python programming using only free resources and start building our personal assistant.

Buckle up and let’s start!

TABLE OF CONTENTS
-------------------------
What the hell is AI?
How does Generative AI work?
OK, in plain English, please?
How can I start?
  - Ask questions and get answers
  - Summarize a long text
  - Get the main topics of an article

maybe you think of AI like this… it is much simpler than that

What the hell is Artificial Intelligence?

Artificial intelligence (AI) is basically the intelligence that machines can exhibit.

It’s not quite the same as natural intelligence, which is what we humans have. AI is the science and technologies related to computer systems being able to do tasks that we would normally consider to require some kind of intelligence, like reasoning, problem-solving and learning.

There are a few different areas of AI research, including:

  • Machine learning: This is where computers can learn without being explicitly programmed. They do this by using data to identify patterns and improve their performance over time.
  • Natural language processing: This is where computers can understand and process human language. It is used in things like chatbots and machine translation.
  • Computer vision: This is where computers can interpret and understand visual information from the world around them. This is used in applications like self-driving cars and facial recognition.

When the news talks about Artificial Intelligence, it usually refers to applications like ChatGPT: this is a Natural Language Processing task that we call Generative AI.

And this is the kind of AI we will learn about here. Machine Learning is a far more complex topic and requires in-depth study. With Generative AI we can start almost from zero 😀.

If you want to learn more about the differences between Machine Learning and NLP, you can read this article:

Generative AI is NOT ChatGPT… only

How does Generative AI work?

Generative AI systems like ChatGPT are Large Language Models: very large neural networks trained on huge amounts of text. We can think of them as “creators”: they take input in the form of text prompts and generate human-like responses based on patterns learned from training data.

Let’s start with an easy explanation: Generative AI models predict the next most statistically plausible word, starting from the prompt (the instruction) given by the user. During training, the neural network learns how words relate to one another across different meanings and contexts.
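To make this concrete, here is a toy sketch in Python (made-up probabilities, not a real model) of what "predicting the next most statistically plausible word" means:

# Toy illustration: invented probabilities for the word that follows the prompt
next_word_probabilities = {
    "Paris": 0.62,
    "London": 0.21,
    "Rome": 0.12,
    "Tokyo": 0.05,
}

prompt = "The capital of France is"
# a real LLM computes these probabilities with a neural network;
# here we simply pick the most likely word from our made-up table
next_word = max(next_word_probabilities, key=next_word_probabilities.get)
print(prompt, next_word)   # The capital of France is Paris

A real LLM does this over a vocabulary of tens of thousands of tokens, one token at a time, which is why the generated text feels fluent.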

In contrast to Machine Learning, which identifies patterns in existing data to make predictions or to guide decisions, Generative AI creates entirely new content that resembles the original dataset (well it can be words 📚 or images 🧩).

OK, in plain English, please?

It is time to switch into practice mode: so far, everything we have covered is general information. So, what do we need to work with an AI of our own, without using ChatGPT?

  1. we need a generative AI model, also called a Large Language Model (from now on, LLM)
  2. we need a programming language to interact with the LLM — we will use Python because it is the most popular language, with plenty of tutorials and learning resources online
  3. we need an environment to see the results of our interaction — we will use Google Colab Notebooks, a 100% free resource that anyone can use

After this it is up to you to evaluate the next steps: keep on learning or give up.

If you have never heard of Google Colab, in this article I explain in a few easy steps how to get it and use it:

building Your AI brick by brick — lexica.art

How can I start?

Learning how to use generative AI like chatbots and large language models can seem daunting if you don’t have a lot of coding experience. But with the open source Hugging Face Hub and some simple Python libraries, getting started is easier than ever.

All the examples we use here are in my free GitHub repository:

Open a new Colab Notebook and start by installing the libraries we need to interact with LLMs:

%%capture
!pip install llama-cpp-python==0.2.56
!pip install sentence_transformers
!pip install rich

!wget https://huggingface.co/brittlewis12/h2o-danube-1.8b-chat-GGUF/resolve/main/h2o-danube-1.8b-chat.Q5_K_M.gguf

The three pip install lines install the libraries we need to interact with the LLM (the %%capture magic at the top simply hides their output). The last line downloads into Google Colab a small but powerful language model released as open source by H2O.ai.
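Running the cell may take a few minutes. If you want to double-check that the model file actually landed in Colab, here is an optional sanity check (plain Python, nothing specific to AI):

import os
# the wget command above saves the model file into /content
print(os.path.exists('/content/h2o-danube-1.8b-chat.Q5_K_M.gguf'))   # should print True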

Once the cell has run we are all set. We now need to load the LLM h2o-danube-1.8b-chat into Google Colab memory:

## Load a llama-cpp-python quantized model
from rich.console import Console
console = Console(width=110)

# path of the GGUF file we downloaded in the previous cell
mp = '/content/h2o-danube-1.8b-chat.Q5_K_M.gguf'

from llama_cpp import Llama
console.print("Loading ✅✅✅✅ h2o-danube-1.8b-chat.Q5_K_M.gguf with LLAMA.CPP...")
# n_ctx is the context window: how many tokens the model can handle in one go
llm = Llama(model_path=mp, n_ctx=8192, verbose=False)

Ask questions and get answers

We have the model ready to be questioned (technically speaking… ready for inference). With the next cell, we will use Python to ask you (the user) to type a question, and the model will reply!

# ask the user for the prompt
prompt = input("User: ")
# format the prompt in the way the model is expecting it
messages = [
      {"role": "system", "content": "You are a helpful assistant.",},
      {"role": "user", "content": prompt}
  ]
# store the reply of the LLM in the result variable
result = llm.create_chat_completion(
                  messages=messages,
                  max_tokens=300,
                  stop=["</s>","[/INST]","/INST"],
                  temperature = 0.1,
                  repeat_penalty = 1.2)
# print the question and the answer
console.print(f'[red1 bold]{prompt}')
console.print(result["choices"][0]["message"]["content"][1:])

Ok, now you will have to wait for about a minute… Yes, it will take some time. Google Colab is running on CPU resources only, so the neural network of the LLM runs slowly with just CPU and RAM. This is the main reason why you may have read about NVIDIA and GPU cards as key factors in the advance of Artificial Intelligence.
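As an optional aside: if you switch the Colab runtime to a GPU and reinstall llama-cpp-python with CUDA support, the same model can be loaded with its layers offloaded to the GPU, which speeds up generation considerably. A minimal sketch, assuming a GPU runtime is available:

from llama_cpp import Llama
# n_gpu_layers=-1 asks llama.cpp to offload all layers to the GPU (requires a CUDA build)
mp = '/content/h2o-danube-1.8b-chat.Q5_K_M.gguf'
llm = Llama(model_path=mp, n_ctx=8192, n_gpu_layers=-1, verbose=False)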

Anyway, here is the result: not bad, right?

h2o-danube result

Summarize a long text

In a similar way we can learn how to ask for a summary. The reality is that LLMs understand our prompt, so we just need to craft the correct prompt for the task: everything else stays the same.

Let’s have a look:

context = input("Text to summarize: ")
prompt = f"""Write a short summary of the given this text extracts:
-----
{context}
-----

Summary:"""

The first line asks the user (us) for input: the difference here is that we called the variable context, and we expect it to be text copied and pasted from an article we want to summarize.

In Python, the prompt variable is what is called an f-string: a string into which we can insert other variables (in our scenario, the context). If you have ever tried ChatGPT, Google Bard or other AIs, this follows the same principle.
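If f-strings are new to you, here is a minimal standalone example (the variable names are just for illustration):

name = "Fabio"
topic = "Generative AI"
# the f prefix lets us embed variables directly inside the curly braces
greeting = f"Hello {name}, today we are learning about {topic}!"
print(greeting)   # Hello Fabio, today we are learning about Generative AI!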

context = input("Text to summarize: ")
prompt = f"""Write a short summary of the given this text extracts:
-----
{context}
-----

Summary:"""
messages = [
      {"role": "system", "content": "You are a helpful assistant.",},
      {"role": "user", "content": prompt}
  ]
start = datetime.datetime.now()
result = llm.create_chat_completion(
                  messages=messages,
                  max_tokens=300,
                  stop=["</s>","[/INST]","/INST"],
                  temperature = 0.1,
                  repeat_penalty = 1.2)
delta = datetime.datetime.now() - start
console.print(result["choices"][0]["message"]["content"][1:])
console.print('---')
console.print(f'Generated in {delta}')

I copied and pasted from Wikipedia the text related to Machine Learning: let’s see the results:

Machine learning, also known as AI, is an area of research focused on developing mathematical models to teach computers to perform tasks without explicit instructions. It has become increasingly popular due to its ability to surpass previous approaches, particularly in natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. The mathematical foundations of machine learning include mathematical optimization methods, such as linear programming, which provide a framework for describing machine learning models. The history of machine learning dates back to 1959 when Arthur Samuel coined the term, while early researchers, such as Donald Hebb, Walter Pitts, and Warren McCulloch, contributed to understanding human cognitive processes, which laid the groundwork for machine learning technologies. The 1960s saw experimental “learning machines” like Cybertron, while 1970s research focused on pattern recognition, as described by Duda and Hart. In 1981, Tom Mitchell provided a formal definition for machine learning, which involves improving performance as experience increases, while modern-day machine learning has two objectives — classifying data based on models developed, or making predictions for future outcomes based on these models.

follow the GitHub Repo link and click on Open in Colab

Get the main topics of an article

Now you’ve got the gist of it, right? To get the main topics as a list, we simply need to ask for them with the right prompt. I did it like this:

context = input("Text to get topics: ")
prompt = f"""Given this text extracts:
-----
{context}
-----

What are the main topics?
Write them as a markdown list."""
our list of main topics
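For completeness, the rest of the cell is exactly the same as in the summarization example (it reuses the llm, console and prompt variables we already have); only the prompt changed:

messages = [
      {"role": "system", "content": "You are a helpful assistant.",},
      {"role": "user", "content": prompt}
  ]
result = llm.create_chat_completion(
                  messages=messages,
                  max_tokens=300,
                  stop=["</s>","[/INST]","/INST"],
                  temperature = 0.1,
                  repeat_penalty = 1.2)
# print the markdown list of topics generated by the model
console.print(result["choices"][0]["message"]["content"][1:])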

Conclusions

With just a few lines of Python we learned how to ask questions to an LLM, summarize a long text, and extract its main topics.

Did you find it hard or impossible?

Follow The AI explorer publication to learn more about AI and to start experimenting yourself.

No doubt, everyone is clueless about what AI will ultimately achieve long-term, since this is just the initial AI phase.

So let’s make sure we are not among the people who don’t know how Artificial Intelligence works and how to make use of it!

Hope you enjoyed the article. If this story provided value and you wish to show a little support, you could:

  1. Clap a lot of times for this story
  2. Highlight the parts most worth remembering (it will be easier for you to find them later, and for me to write better articles)
  3. Learn how to start to Build Your Own AI, download This Free eBook
  4. Sign up for a Medium membership using my link — ($5/month to read unlimited Medium stories)
  5. Follow me on Medium
  6. Read my latest articles https://medium.com/@fabio.matricardi

If you want to read more, here are some ideas:

Artificial Intelligence
Python
Local Gpt
Open Source
Beginner