Demystifying Algorithms

There has been a lot of hype and hope around algorithms recently. But what can they do and what can they not do? How do they function? Can they feel like humans? Kasia Barczewska, Head of Research and Development at Cardiomatics, takes us on an eye-opening journey through Artificial Intelligence, algorithms and machine learning.

How would you explain to a child what an algorithm is and what AI is?

An algorithm is a kind of recipe which describes how to solve a problem or how to achieve a goal. The goal is crucial: what exactly do we want to do? If we want to bake a cake, the recipe is an algorithm which defines how to do this, starting from the list of ingredients we need and ending with the temperature of the oven and the baking time. If we follow the algorithm, we will obtain a baked cake as a result. If we want to build a castle from blocks, the instructions for how to do this are also an algorithm, one which lets us achieve the goal and get the same building that is pictured on the box of blocks.

What is AI? It is a common term at a very high level of abstraction. It is the ability of computers to learn, generalise, use knowledge in practice and make decisions. Behind that popular term, there is just statistics and mathematical modelling. How can this be explained to a child? I would say: “AI is how my laptop can learn things like you do, my dear child. It learns much more slowly and needs many more examples than you do.”

What does the process of creating an algorithm look like?

So you are asking about the algorithm for an algorithm. Essentially, it is a very creative process: the algorithm designer is like a child sitting with all the blocks in the world, trying to figure out how to build a castle. The designer has to decide which blocks should be used and what steps should be taken to create that building. The result of her or his work is both a beautiful castle and a book of instructions for others who would like to build it in the same way.

Another example is a Japanese origami master who wonders how to fold a crane from a square piece of paper. The result of his efforts is a set of instructions for other origami hobbyists, who can then fold the paper in the same way.

Generally, there are a few steps which are very important in this process. The algorithm designer has to:

  • Specify the goal. For example, “I want to build a castle/bake a cake”, “I want to create a crane”, “I want to detect atrial fibrillation.”
  • Define the set of assumptions/tools that can be used: “I have 200 blocks with specific shapes and colours/eggs, flour and apples”, “I have one square piece of paper”, “I have a database of 24-hour recordings from 100 patients.”
  • Define exactly what the result of the algorithm should be: “I will build Howl’s Moving Castle from blocks/I will bake an apple pie”, “I will make an origami crane”, “The result will be 0 or 1 depending on whether or not atrial fibrillation was detected in a 24-hour ECG signal.”
  • Do the research: how do others do this?
  • List the steps that should be taken to achieve the goal. For example: writing down the recipe, or noting in pseudocode how the signal will be processed, which models will be used, which machine learning methods will be used to train them, and which metrics will be applied in the evaluation process.
  • Determine how you will carry out these steps. “I will build my castle in my room, listening to music/where is the baking tray?”, “I will fold the paper by hand”, “I will write code in Python, train models on a GPU and save the best one in an HDF5 file” (a minimal sketch of what this can look like follows this list).
  • Evaluate/review the whole process. “Oops, the castle collapsed”, “I have to change the design”, “I have forgotten to add sugar!”, “It’s a crane that looks like a penguin.”
  • Test your model on new data and calculate the evaluation metrics. Analyse the new parameters and try to understand what they mean. Compare with other models or with state-of-the-art methods.
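
To make the last few steps a bit more concrete, here is a minimal, hypothetical sketch in Python for the atrial-fibrillation example. The synthetic data, window length, network size and file name are placeholders chosen for illustration; this is a sketch of the kind of workflow described above, not Cardiomatics’ actual pipeline.

```python
import numpy as np
from tensorflow import keras

# Goal: output 0 or 1 -- was atrial fibrillation (AF) detected in an ECG window?
# Assumption: we already have fixed-length ECG windows and labels (here: random placeholders).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2000, 1)).astype("float32")  # 1000 windows, 2000 samples each
y = rng.integers(0, 2, size=1000)                        # 0 = no AF, 1 = AF (placeholder labels)

# Divide the data: part for training, part held back for the final evaluation.
X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]

# The model -- the central part of the algorithm (a small 1-D convolutional net here).
model = keras.Sequential([
    keras.layers.Conv1D(16, kernel_size=7, activation="relu", input_shape=(2000, 1)),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Learning strategy: what is minimised (binary cross-entropy) and how fast (learning rate).
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Train, then save the model to an HDF5 file, as mentioned in the steps above.
model.fit(X_train, y_train, epochs=5, validation_split=0.2, verbose=0)
model.save("af_detector.h5")

# Test on data the model has never seen and look at the evaluation metric.
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"held-out accuracy: {acc:.2f}")
```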

When the algorithm is ready, you feed it data, and what happens next?

The computer can start learning. Computers process bytes of data to find patterns and derive the statistical parameters in which their knowledge will be stored. I can observe the learning process by looking at learning curves, relaxing with a coffee and waiting until it is finished. Sometimes the model is ready before the coffee machine has finished grinding the beans. But, in some cases, gigabytes of data and complicated learning strategies can stretch the learning process to several days or weeks.

When the model is ready, it must be evaluated on data that was not used in the training process. I have to assess if it does what it was designed to do. And, if it doesn’t, I have to review the whole learning process.
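
As an illustration only: once you have the held-out labels and the model’s predictions, evaluation metrics that matter in a medical setting, such as sensitivity and specificity, can be computed in a few lines. The numbers below are made up, and scikit-learn is just one possible tool for this.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 0, 1])  # reference diagnoses (placeholder)
y_pred = np.array([0, 0, 1, 0, 1, 0, 1, 0, 1, 1])  # the model's predictions (placeholder)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # how many true cases the model caught
specificity = tn / (tn + fp)  # how many healthy recordings it correctly left alone
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```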

Do we know, step by step, how the algorithm draws conclusions or transforms data?

Yes, we know. Everything depends on our decisions: on the data that we prepared to feed it, on the model that we chose, and on the learning strategy that we decided to apply at the beginning of the process. The computer’s knowledge is stored in statistical parameters that we can visualise at every step of the learning process. Of course, the simpler the algorithms are, the easier it is. The more sophisticated the algorithms, the more complicated it gets.
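
For a very simple model this is easy to see. The hypothetical snippet below fits a logistic regression to made-up data and prints its learned parameters, the statistical values in which its “knowledge” is stored; a deep network stores the same kind of knowledge, only spread over millions of such parameters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three made-up features per recording
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # a simple rule hidden in the data

model = LogisticRegression().fit(X, y)
print("learned weights:", model.coef_)       # the model's "knowledge" about each feature
print("learned bias:", model.intercept_)
```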

What is machine learning?

These are the best practices for training models so that they predict effectively in real cases. You can think of machine learning as a set of tools that engineers use to teach a computer and to evaluate its learning process. These tools are used at several stages: at the beginning, to prepare the material that will be used in learning, and then to train and evaluate an algorithm. Among the “preprocessing tools”, an engineer has methods of data exploration, methods of data augmentation, the division of data into training and testing sets, and selection methods that highlight only the most essential features in the data and so facilitate the learning process. Then she or he can choose from a wide range of models or model architectures, which are the central part of the algorithm. Depending on the model, a proper learning strategy should be selected: what is going to be minimised or maximised by the algorithm during the learning process? How fast should the algorithm learn? An engineer can also train several different models and compare their results using evaluation metrics. The basis of all these tools is statistics and mathematics. All of them are available in open source libraries and tutorials, so every engineer in the world can use them to train her or his model, or improve on them.
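
A hypothetical sketch of a few of these tools, using scikit-learn as one example of such an open source library: a division into training and testing sets, a feature-selection step, two candidate models, and a common evaluation metric. The data and the choice of models are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))           # 500 samples, 20 candidate features (synthetic)
y = (X[:, 0] - X[:, 3] > 0).astype(int)  # a made-up labelling rule

# Divide the data into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Two candidate models, each preceded by a step that keeps only the 5 most informative features.
candidates = {
    "logistic regression": make_pipeline(SelectKBest(f_classif, k=5), LogisticRegression()),
    "random forest": make_pipeline(SelectKBest(f_classif, k=5),
                                   RandomForestClassifier(random_state=0)),
}

# Train each candidate and compare them with the same evaluation metric.
for name, pipeline in candidates.items():
    pipeline.fit(X_train, y_train)
    score = accuracy_score(y_test, pipeline.predict(X_test))
    print(f"{name}: accuracy = {score:.2f}")
```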

When machines learn, do they multiply the mistakes in the data they use?

Algorithms are only as good as the data they use in the learning process. If there are systematic mistakes in the labelling of the data, the algorithm will learn them and will make the same mistakes in reality. That’s why it is essential, especially in medical applications, to consult many specialists about the correct labels (diagnoses) and to gather a range of knowledge from training materials.
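
A toy demonstration of this point: if every label in the training data is systematically wrong, the model will faithfully learn the wrong rule. The data and model below are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y_correct = (X[:, 0] > 0).astype(int)  # the true diagnoses
y_mislabelled = 1 - y_correct          # a systematic labelling error: every label flipped

model = LogisticRegression().fit(X, y_mislabelled)
agreement = (model.predict(X) == y_correct).mean()
print(f"agreement with the true diagnoses: {agreement:.2f}")  # close to 0.0
```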

The problem of transparency is a problem with many levels of abstraction. At the higher level, people can personify “AI algorithms” without knowing what they are, when they are simply a collection of statistical and mathematical models. At the lower level, the most popular algorithms are currently criticised for a lack of transparency because of the millions of parameters they have, and also for their entirely data-driven approach to learning, without explicit rules or laws, which is a completely different way of learning from how humans learn. This criticism is good because it forces researchers to explore more and to develop different methods.

Can AI do things without our control which could be dangerous for patients?

They are just tools in human hands. As long as they are used in good faith, they can support the doctor’s work. Of course, even in good faith there is room for human error: for example, in poorly prepared data given to an algorithm during training. Fortunately, good practice requires a final step in developing a model or algorithm: an evaluation, which should be done very carefully. At this stage, we can assess whether our method works as it should and, if it doesn’t, we have to review the whole process.

How precise are algorithms? Can they forget things like people do?

The engineer’s role is to prepare the data accurately and to choose the best machine learning tools for the modelled phenomenon. If she or he forgets about something in the learning process, the data used in training will be unrepresentative of the phenomenon, and the algorithm will not work well in reality. It will not forget, but it will not know that other distributions of the phenomenon exist. Algorithms model knowledge only in the way it was shown to them in the learning process.

What will algorithms never be able to do?

To love, to have family, to meet friends, to be curious and have hobbies, to relax and drink coffee. But thanks to algorithms, humans will have more time for doing these lovely things.

How will algorithms change medicine?

Dramatically! They will give doctors the possibility to make a holistic diagnosis based on automatically processed data from different examinations. They will find relationships in patient data and show these relationships to doctors. They will shorten the time between examination and diagnosis. They will give the doctor time to talk with the patient and to think about therapy. They will reduce waiting times for specialists. They will increase the number of detected cases of a given disease by making fast analysis of long-term signals possible. They will facilitate examinations in places in the world where there is a lack of doctors.

How do you prepare data for algorithms used in healthcare?

The preparation of data is a critical step, especially in medical applications, and must be done very carefully. A model will only be as good as the data used to train it. Pairs of sample data with proper labels (proper diagnoses) are key to success. Ideally, they should be reviewed by several independent specialists. At this stage, very close cooperation with doctors is required.

Can AI do something without our permission or control?

If the whole process of developing an algorithm was done well, and the algorithm was tested on the representative dataset, then we know exactly what to expect from this algorithm and how it will work in reality. If we did not pay enough attention to the testing, then different things may happen in a real application, and we will not understand why they happened.

If you ask me as an engineer whether a well-designed and well-tested AI algorithm can do something by itself, without our permission or control, the answer is “no”. If we let it do things by itself, then yes. But then it is under our control, because we expected this result.

Is it possible to build an AI system that will learn human feelings like empathy, compassion, sympathy, etc.?

To detect emotions, definitely yes. To feel emotions like humans? What does a “feeling” mean in the context of a computer? Making decisions in a specific way, depending on the gathered data? Triggering chemical processes somewhere? Or simulating facial expressions? The question is: if we understand human emotions in one of those ways, could we describe them to the algorithm? Even if we could, these would only be mathematical models of emotions.

Can AI become more intelligent than people are?

What does ‘more intelligent’ mean? There are different types of intelligence and different tasks in which we can compare humans and computers. And, of course, there are some cases in which engineers have developed algorithms that surpass humans. Examples are old board games like Go and chess, in which algorithms can beat human masters. In the case of chess, the former world champion Garry Kasparov admires the unusual style in which the algorithm plays and notes that humans can learn from the new strategies proposed by the computer.

Of course, algorithms managed to reach such levels because they processed millions of training examples consisting of games between humans. There are also publications in which researchers have shown that algorithms outperform individual doctors when compared against diagnoses developed by a group of specialists. This, too, was thanks to the fact that the algorithm learnt from millions of properly prepared training examples.

In contrast, humans need only a few training examples to gain enough knowledge to understand a new phenomenon. Humans can generalise their expertise much better, or infer knowledge from just a short definition of a new term. They can associate facts from many different fields without needing as many training examples as current algorithms need to defeat a chess master.

From the author: Kasia (Ph.D. in Biomedical and Medical Engineering) has an extraordinary ability to explain even the most complicated things. After an interview with her, I finally understood issues that had been bothering me for a long time. She is a passionate researcher at Cardiomatics – a cloud AI tool for ECG analysis.
