Amazing AI Tables
Recognition of Geometric Shapes
Published on Feb 16, 2023. Last modified on Sep 26, 2023.
We can easily recognize the outlines of female figures in Tatyana Mark’s beautiful miniatures. At the same time, no matter how advanced a biometric system is, it cannot cope with such a task. For biometrics to succeed, three to four dozen parameters of an actual female figure must practically coincide with the measurements of the women in the pictures above. But that is impossible when all you have at your disposal is a few lines from a minimalist artist.
This popular children’s game proves that a child of around two years old recognizes and correctly uses simple geometric shapes that she sees for the first time. In other words, children do not need any prior learning before playing such a game.
From the above, we conclude that neither biometrics nor Deep Learning based on large-scale prior learning has anything to do with human recognition.
Note that we do not criticize existing approaches. We only argue that in addition to known and successfully applied technologies, there should also be an alternative recognition method that people unconsciously and effectively use. This article is devoted to the search for this alternative.
Two general remarks:
- The impressive progress of game-playing programs (Chess, Go), machine translation, driverless vehicles, and so on is built on the brute-force processing of a vast number of options. In this respect, talk of the success of artificial intelligence is dictated by market conjuncture or by simple ignorance.
- The figure below illustrates the complete helplessness of computer recognition: no truly workable recognition program has been created for such images, which is why they are widely used to protect websites.
Introduction to AI Tables
One glance at the shapes above will be enough to find and identify the items that are similar. But what does “similar” mean? There is no strict definition, and science does not know how to compare geometric figures of different shapes. Only guided by our innate intuition, we made the mentioned pairs.
Example #1. We used the Hand-Drawn Shape Simulation (HDSS) package to create the geometric shapes here. In the left table, called SHUFFLED, the shapes are arranged randomly. Try to rearrange them so that similar shapes end up in a common column. One correct answer is shown on the right, in the SEPARATED table, but you could easily suggest others.
Successful sorting allows you to introduce the concept of type. A shape type is the table column’s name that contains all the similar shapes. For example, the five-pointed stars above are now assigned type “C,” but another sorting option might change the type. All that matters here is that different shapes (e.g., five- and six-pointed stars) will be assigned different types.
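The notion of a type as "the name of the column that holds all similar shapes" can be sketched as a simple mapping. The shape names below are hypothetical placeholders, not real image files from the package:

```python
# A minimal sketch of the "type" concept: a SEPARATED table is a mapping
# from a column name (the type) to the similar shapes stored in that column.
# Shape names here are hypothetical placeholders, not actual image files.
separated = {
    "A": ["triangle_1", "triangle_2", "triangle_3"],
    "B": ["square_1", "square_2", "square_3"],
    "C": ["star5_1", "star5_2", "star5_3"],   # five-pointed stars
    "D": ["star6_1", "star6_2", "star6_3"],   # six-pointed stars
}

def type_of(shape_name: str) -> str:
    """Return the type (column name) a shape was assigned to."""
    for shape_type, column in separated.items():
        if shape_name in column:
            return shape_type
    raise KeyError(shape_name)

print(type_of("star5_2"))  # -> C
```

Another valid sorting might rename the columns, but, exactly as the text says, different shapes always land under different type names.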
Example #2. Let’s proceed to recognize the new shapes shown here in a different color.
We’ll match each new shape against all the elements (shapes) of the SEPARATED table, which this time we place on the left. If a new shape is most similar, for example, to the six-pointed stars from this table, we assign it their type, “D,” and so on. At the same time, we add each new shape to the final RECOGNIZED table on the right, moving from top to bottom in accordance with its type. As a result, the RECOGNIZED table demonstrates all 18 recognition results.
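The matching step above is, in effect, a nearest-neighbor assignment: the new shape inherits the type of the most similar known element. Here is a toy sketch of that logic; the similarity function and feature tuples are our own illustrative stand-ins, not the package's real image-based measure:

```python
# Sketch of the recognition step: compare a new shape with every element
# of the SEPARATED table and inherit the type of the most similar one.
# The similarity function below is a toy stand-in on feature tuples.

def similarity(a, b):
    """Toy similarity: inverse of the distance between feature tuples."""
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)

# SEPARATED table: type -> list of feature tuples (hypothetical data)
separated = {
    "C": [(5.0, 1.0), (5.1, 1.1)],   # five-pointed stars
    "D": [(6.0, 1.0), (6.1, 0.9)],   # six-pointed stars
}

def recognize(new_shape):
    """Return the type of the most similar element in SEPARATED."""
    best_type, best_sim = None, -1.0
    for shape_type, column in separated.items():
        for known in column:
            s = similarity(new_shape, known)
            if s > best_sim:
                best_type, best_sim = shape_type, s
    return best_type

print(recognize((5.95, 1.05)))  # closest to the six-pointed stars -> D
```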
So, the SHUFFLED table, filled with shapes in random order and then converted to a SEPARATED table, allowed us to determine the type of new shapes, i.e., recognize shapes. In other words, shape recognition was provided only by the contents of the SHUFFLED table.
The process of transforming the SHUFFLED table into the SEPARATED table is called self-study.
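Under a simplifying assumption, self-study can be pictured as greedy grouping: each shape joins an existing column if it is similar enough to that column's first member, and otherwise starts a new column. The one-dimensional "shapes," the similarity measure, and the 0.8 threshold below are all illustrative; the real package works on 200 x 200 pixel images and, as noted later in this article, exposes no threshold parameter:

```python
# A sketch of self-study as greedy grouping by similarity.
# Everything here (1-D "shapes", the measure, the 0.8 cutoff) is
# illustrative only, not the package's actual algorithm.

def similarity(a, b):
    return 1.0 / (1.0 + abs(a - b))

def self_study(shuffled):
    """Turn a flat SHUFFLED list into SEPARATED columns of similar shapes."""
    columns = []
    for shape in shuffled:
        for column in columns:
            if similarity(shape, column[0]) > 0.8:
                column.append(shape)   # similar enough: join this column
                break
        else:
            columns.append([shape])    # no match: start a new column
    return columns

shuffled = [5.0, 6.1, 5.1, 6.0, 5.05, 6.05]  # two kinds of "shapes", mixed
print(self_study(shuffled))  # -> [[5.0, 5.1, 5.05], [6.1, 6.0, 6.05]]
```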
Two more self-study examples are shown below. For them, images were created manually by the INKredible drawing program on the iPad and then scaled to the table cell size of 200 x 200 pixels.
Example #3.
Example #4.
A New Model of Artificial Intelligence (AI)
Let’s imagine that there is a computer program on a remote server that can completely replace us when we operate with SHUFFLED, SEPARATED, and RECOGNIZED tables. Let’s call this program AI Tables.
By uploading a small set of objects to such a server (for example, the contents of the SHUFFLED_3 table), we give the server the ability to recognize. Indeed, after self-study, the server can determine the type (suit) of an input playing card and return this information online.
Great, but not good enough for us! We also want the server to recognize distorted star shapes. For this, we need to send it the SHUFFLED table’s contents, too.
So, after getting acquainted with the object samples, our server can recognize similar ones.
All this strongly resembles the story of the little girl (remember the photo at the beginning of the article?) who could recognize geometric shapes after receiving a toy. Obviously, after seeing four cards of different suits for the first time, she will be able to sort the rest of the deck into suits as well.
Humans do not need special training to learn recognition, and neither does our server with AI Tables software support.
Such a server is a full-fledged artificial intelligence system: it is autonomous, and an outside observer cannot establish whether a person or a computer performs the online recognition (in the spirit of A. Turing’s test).
About Deep Learning
The well-known Deep Learning technology is the first successful attempt to recognize objects where significant geometric distortions are allowed. Handwritten numbers are a classic example. You can find a colorful introduction to this complex and costly technology here.
Deep Learning has become widely popular due to the support of large corporations (Google, Facebook), the constant flow of investments, and… catchy terminology (for novice researchers, “artificial intelligence,” “building and training neural networks,” etc. sounds much more attractive than, for example, “optimization of database queries”). Today even modern processors are designed with hardware support for Deep Learning.
However, we are forced to admit that Deep Learning technology has nothing to do with human intelligence or with understanding how our brain structures function.
The proof is, as mathematicians say, the existence of counterexamples. A counterexample is an example that disproves a statement. For instance, the statement “All prime numbers are odd” is refuted by 2, which is both prime and even.
In the case of Deep Learning, counterexamples are, in particular, examples #1 and #2 above, which do not cause difficulties for any of us, but are inaccessible to this technology.
We don’t need special training to recognize objects. We extract all the necessary information for the following recognition by analyzing a small set of them (self-study).
The main difference between the AI Tables method and Deep Learning lies in the preparation for recognition. AI Tables does this preparation by itself and within itself. Deep Learning, by contrast, uses training data: a huge array of images (analogous to the SEPARATED table) picked in advance by a human outside the system.
About AI Tables
While describing the various table manipulations and remote-server operations above, we didn’t mention that a software implementation of AI Tables already exists. Its text is shown below, and the shapes-recognition library can be installed from PyPI right now:
pip install shapes-recognition==3.1.3
import shapes_recognition as sr
sr.init()
# S E L F - S T U D Y
# ---------------------------------------------
# Get a list of shapes for self-study
list_self_study_in = sr.get_self_study_data('DATA_SELF_STUDY')
# Self-study
list_self_study_out = sr.self_study(list_self_study_in)
# Show self-study result
sr.show_self_study_results(list_self_study_in, list_self_study_out)
# ---------------------------------------------
# R E C O G N I T I O N
# ---------------------------------------------
# Get a list of shapes for recognition
list_recognition = sr.get_recognition_data('DATA_RECOGNITION')
# Recognition
recogn_dictionary = \
sr.recognition(list_self_study_out, list_recognition)
# Show recognition result
sr.show_recognition_results(recogn_dictionary)
# ---------------------------------------------
All the results presented earlier (sorting geometric shapes, recognizing them, and visualizing AI tables as *.png images) were produced not manually but by this tiny program; see examples #1 to #4. It has been tested on the following platforms: Ubuntu 20.04, Windows 11, and macOS Ventura. You can find the code of the examples above and all data samples on our website.
About The Program and Data
About the Program. We have deliberately eliminated input parameters that affect the course and results of the calculations. For example, there is no “threshold” parameter, which had seemed mandatory for recognition. The absence of parameters simplifies the application but leads to some limitations; the presence of parameters, however, would deprive the program of autonomy and thus of the right to be called an AI system.
The number of rows in the AI table is always 6 (a fixed number of instances of each type), and the number of columns can vary from 2 to 4 (the number of different data types). Thus, before starting the program, the DATA_SELF_STUDY folder must contain a certain number of image files: 12, 18, 24 (one of these). The DATA_RECOGNITION folder can be empty or contain any number of images: “candidates” for recognition. The name and order of the files do not matter. All results (tables and text files) are stored in the RESULTS folder, which is cleared each time the program is started. You will find the recognition results, including those that did not fit in the RECOGNIZED table, in the recognition.txt file.
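The folder constraints above (6 rows of each type, 2 to 4 columns, hence 12, 18, or 24 images) can be checked before a run. The check itself is our addition; only the folder name and the allowed counts come from the article:

```python
# Pre-run sanity check for the constraints described in the article:
# DATA_SELF_STUDY must hold 12, 18, or 24 images (6 rows x 2..4 columns).
import os

ROWS = 6                                  # fixed number of instances per type
ALLOWED = {ROWS * cols for cols in (2, 3, 4)}   # {12, 18, 24}

def check_self_study_folder(path: str) -> int:
    """Return the image count if it is valid, otherwise raise ValueError."""
    count = len([f for f in os.listdir(path) if f.lower().endswith(".png")])
    if count not in ALLOWED:
        raise ValueError(
            f"{path} holds {count} images; expected one of {sorted(ALLOWED)}"
        )
    return count
```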
The program operation is entirely determined by the “training set,” i.e., by the set of geometrical shapes in the DATA_SELF_STUDY folder.
Running Time. As you can see from the screenshot of PyCharm’s terminal, the self-study part of Example #1 took 36 min 01 s, and recognition took 53 min 45 s (processor: 3 GHz 6-core Intel Core i5; memory: 8 GB DDR4).
About the Data. The AI table cells are filled with images. These are color or grayscale images of 200 x 200 pixels in which thin contrast lines form the content part. The thinner and more contrasting the lines, the better. The background color must be white. The content should comprise only a small fraction of the entire image, i.e., the white color should strongly prevail.
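The data requirements above can be expressed as a simple check: the image must be 200 x 200 pixels, and non-white content should occupy only a small fraction of it. The 10% bound below is our illustrative choice; the article itself only says that white should strongly prevail:

```python
# Sketch of the data requirements: a 200 x 200 grayscale image (a nested
# list of 0..255 values) should be mostly white, with content occupying
# only a small fraction. WHITE and MAX_CONTENT are illustrative choices.

WHITE = 250          # pixels at or above this value count as background
MAX_CONTENT = 0.10   # assumed upper bound for the content fraction

def content_fraction(pixels):
    total = sum(len(row) for row in pixels)
    content = sum(1 for row in pixels for p in row if p < WHITE)
    return content / total

def is_valid_cell(pixels):
    return (len(pixels) == 200
            and all(len(row) == 200 for row in pixels)
            and content_fraction(pixels) <= MAX_CONTENT)

# mostly white image with one thin dark line across the middle
image = [[255] * 200 for _ in range(200)]
image[100] = [0] * 200
print(is_valid_cell(image))  # -> True (200 / 40000 = 0.5% content)
```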
About the Problem with MNIST Data. We have to discuss the difficulties of using the MNIST database, which serves as the de facto standard in artificial intelligence. Unfortunately, our image format of 200 x 200 pixels and the extremely compressed MNIST images of 28 x 28 pixels are incompatible. This compression was dictated solely by the computational limitations of Deep Learning; that is, the data was deliberately degraded for the sake of a particular procedure. We express our deepest regret that the original digitized data (before the compression procedure) is inaccessible. We find no rational explanation for this.
The article (Cheng-Lin Liu et al., “Handwritten digit recognition: investigation of normalization and feature extraction techniques” (2003), Central Research Laboratory, Hitachi Ltd., Kokubunji-shi, Tokyo, Japan) provides a table (see below) with selected samples of the original NIST images that we used for trial testing.
Below are the results of comparing some digits, combined in the following pairs: 2–4, 3–5, 4–6, and 5–7. The original unsorted SHUFFLED table and the sorted SEPARATED table are shown side by side.
The result is positive; try to find the only mistake yourself.
The success of this sorting means that the program picks up the difference in the compared numbers and, therefore, AI Tables is also capable of recognizing NIST handwritten numbers along with the Deep Learning procedure. See more in our new article “Handwritten Digit Recognition (MNIST Dataset) without Using Neural Networks”.
Shapes Similarity
Let’s take another look at Example #1 above. Obviously, of the four types of figures considered, the distorted five- and six-pointed stars are the most difficult to distinguish from each other. However, we can easily recognize them or, in other words, correctly assess their degree of similarity. We suggest modeling this recognition capability by the get_similarity() function from the shapes_recognition package, which has indirectly proven its effectiveness in building AI tables.
Now, let’s evaluate the potential of get_similarity() directly by conducting a large-scale computing experiment. Reusing HDSS, we generate two groups of distorted stars: one group of 50 five-pointed stars and one group of 50 six-pointed stars.
If get_similarity() works adequately, the similarity of stars of the same type (stars belonging to the same group) should be greater than the similarity of stars of different types, i.e., stars belonging to different groups.
Let us make sure of this. Note that the reliability of our experiment’s results is quite high, as it is based on the analysis of 4,950 comparisons.
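The figure of 4,950 is simply the number of unordered pairs among the 100 generated stars, which is easy to verify:

```python
# 4,950 comparisons = number of unordered pairs among 100 stars
# (two groups of 50 each): C(100, 2) = 100 * 99 / 2.
from math import comb

n_stars = 50 + 50                 # two groups of 50 stars
print(comb(n_stars, 2))           # -> 4950

# Of these, pairs within a group vs. pairs across the two groups:
within = 2 * comb(50, 2)          # 2 * 1225 = 2450 same-group pairs
across = 50 * 50                  # 2500 cross-group pairs
assert within + across == comb(n_stars, 2)
```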
The figure above shows the results for stars from different groups in blue and for stars within a group in orange. As we can see, the “orange” histogram is shifted to the right relative to the “blue” one; that is, stars of the same type indeed show higher similarity values when compared. At the same time, the size of the overlap between the two histograms reflects the probability of a decision error. With similarity values above 0.66, you can reliably identify an unknown star.
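The decision rule implied by the histograms can be sketched as a simple threshold test. The 0.66 cutoff comes from the experiment above; the sample similarity values below are made-up illustrations, not output of get_similarity():

```python
# Sketch of the decision rule implied by the histogram overlap: a pair of
# stars with similarity above the 0.66 cutoff is treated as "same type".
# The sample values are made-up illustrations, not get_similarity() output.

THRESHOLD = 0.66   # cutoff quoted in the experiment above

def same_type(similarity: float) -> bool:
    """Decide whether two compared stars belong to the same type."""
    return similarity > THRESHOLD

print(same_type(0.80))  # -> True  (likely stars of the same type)
print(same_type(0.50))  # -> False (likely stars of different types)
```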
Note. Everything necessary is available on our website for those who want to repeat this experiment.