Human intelligence is the mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one’s environment.
In science, the term intelligence typically refers to what we could call academic or cognitive intelligence. In their book on intelligence, professors Resing and Drenth (2007) answer the question ‘What is intelligence?’ using the following definition: “The whole of cognitive or intellectual abilities required to obtain knowledge, and to use that knowledge in a good way to solve problems that have a well-described goal and structure.”
Intelligence has been defined by many scientists in different ways, though a common view is that intelligence is a joint product of heredity and environment. Alongside these, many further definitions of intelligence have been offered:
“Intelligence is good reasoning, judgment and self-healing capacity”
“Intelligence is the process of abstract thinking”
“Intelligence is the ability to react to the environment”
“Intelligence is the capacity to learn, solve problems, create new products and communicate.”
“Intelligence is the brain’s ability to take in information and analyze it quickly and accurately.”
Intelligence has been pondered and discussed by mankind since ancient times. However, it has often been misunderstood.
Development of Artificial Intelligence
Artificial Intelligence, or simply AI, is the term used to describe a machine’s ability to simulate human intelligence. Capacities like learning, logic, reasoning, perception, and creativity, once considered unique to humans, are now being replicated by technology and used in every industry.
The concept of Artificial Intelligence dates back to the early days of information science. In fact, from a philosophical point of view, it can be said that the foundations of Artificial Intelligence rest on the logical inferences of Aristotle.
Artificial intelligence is defined as the ability of a computer or a computer-controlled machine to perform tasks related to higher mental processes such as reasoning, deriving meaning, generalizing, and learning from past experience, qualities generally assumed to be specific to humans.
The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, healthcare, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning.
In the 17th century, René Descartes suggested that the bodies of animals were complex machines.
In 1642, Blaise Pascal built the first numerical calculator, mechanizing arithmetic that had previously required human intelligence.
In 1673, the famous scientist Gottfried Leibniz advanced Pascal’s design, developing a calculator that could also multiply and divide.
In the 18th century, automata that imitated people and animals were studied. These studies formed the foundations of artificial life, today one of the sub-branches of artificial intelligence.
In the 1800s, George Boole created one of the structures used in artificial intelligence studies by presenting logic in mathematical form with his Boolean algebra.
Also in this century, Charles Babbage and Ada Byron (Lady Lovelace) worked on programmable mechanical calculators.
In the 20th century, artificial intelligence studies gained great momentum. Bertrand Russell and Alfred North Whitehead published “Principia Mathematica”, which shaped the development of formal logic.
In 1943, the first study in the field of Artificial Intelligence was carried out by Warren McCulloch and Walter Pitts. This study presented a computational model using artificial neurons, grounded in propositional logic, physiology, and Turing’s theory of computation.
The name “artificial intelligence” itself dates back to the 1950s.
In 1957, Allen Newell and Herbert Simon developed the General Problem Solver, the first program produced according to the “thinking like a human” approach.
Between 1952 and 1960, Arthur Samuel at IBM developed the first checkers program, one of the earliest game-playing programs.
In 1958, John McCarthy, father of the name artificial intelligence, developed the Lisp programming language.
In 1967, the DENDRAL program was developed by Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, and Georgia Sutherland at Stanford University. DENDRAL was the first knowledge-based program, built to analyze organic chemical compounds.
In 1972, the Prolog programming language appeared and became an important tool in much artificial intelligence research.
In 1974, Edward Shortliffe developed MYCIN, which was considered the first expert system.
In the 1980s, Lisp-based computers were produced and released, and the first expert-system development tools and commercial artificial intelligence applications appeared.
In the late 1990s, with the widespread use of the internet, search engines and other artificial intelligence-based programs were developed.
In the early 2000s, smart toys using artificial intelligence were launched.
Artificial Intelligence Tests
The ability to think is a defining feature of human intelligence. Can computers really think like humans? To find an answer to this question, computing pioneer Alan Turing devised the Turing test in 1950 as a rudimentary way of determining whether or not a computer counts as “intelligent”.
Turing Test in Artificial Intelligence
Imagine a game of three players: two humans and one computer. An interrogator (a human) is isolated from the other two players. The interrogator’s job is to figure out which one is the human and which is the computer by asking questions of both of them. To make things harder, the computer tries to trick the interrogator into guessing wrongly.
Under the “standard interpretation” of the Turing Test, player C, the interrogator, is given the task of determining which player, A or B, is a computer and which is a human. If the interrogator cannot distinguish the answers of the computer from those of the human, the computer passes the test and is considered as intelligent as a human.
Chinese Room Test
John Searle’s thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. The question Searle wants to answer is this: does the machine literally “understand” Chinese? Or is it merely simulating the ability to understand Chinese? Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets.
Searle could receive Chinese characters through a slot in the door, process them according to the program’s instructions, and produce Chinese characters as output. Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. (“I don’t speak a word of Chinese,” he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
Application Fields of Artificial Intelligence
The main application areas of artificial intelligence are described below.
Expert Systems
Expert systems are computer applications developed to solve complex problems in a particular domain, at the level of a human expert. An expert system can resolve many issues that would ordinarily require a human expert, and it can advise users as well as explain to them how it reached a particular conclusion or recommendation.
The concept of expert systems was first developed in the 1970s by Edward Feigenbaum, professor and founder of the Knowledge Systems Laboratory at Stanford University.
Expert systems have played a large role in many industries, including financial services, telecommunications, healthcare, customer service, transportation, video games, manufacturing, aviation, and written communication.
There are many examples of expert systems, including the following:
DENDRAL: An expert system used in chemical analysis to predict molecular structure.
PXDES: Determines the type and degree of lung cancer in a patient based on the data.
CaDet: A clinical support system that can identify cancer in its early stages.
MYCIN: One of the earliest expert systems, based on backward chaining. It could identify bacteria causing infections and recommend drugs based on the patient’s weight.
DXplain: A clinical support system that suggests a variety of diseases based on the doctor’s findings.
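The backward chaining used by systems like MYCIN can be sketched in a few lines: to prove a goal, the engine checks whether it is a known fact, and otherwise tries to prove every premise of a rule that concludes it. The rules and symptoms below are purely hypothetical illustrations, not taken from any real expert system.

```python
# Minimal backward-chaining sketch. Rule format: conclusion -> list of premises.
# All rule and fact names here are made up for illustration.
RULES = {
    "bacterial_infection": ["fever", "high_white_cell_count"],
    "fever": ["temperature_above_38"],
}

def prove(goal, facts, rules=RULES):
    """A goal holds if it is a known fact, or if every premise
    of a rule concluding it can itself be proved."""
    if goal in facts:
        return True
    premises = rules.get(goal)
    if premises is None:
        return False
    return all(prove(p, facts, rules) for p in premises)

facts = {"temperature_above_38", "high_white_cell_count"}
print(prove("bacterial_infection", facts))  # True
```

A real expert system adds certainty factors and an explanation facility on top of this core loop, which is how such systems can justify their conclusions to users.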
Genetic Algorithms
Genetic algorithms are an approach to optimization problems that simulates evolutionary behavior in complex systems. They were first used by Holland (1975).
Genetic algorithms are based on the ideas of natural selection and genetics. They are commonly used to generate high-quality solutions for optimization and search problems. A genetic algorithm repeatedly modifies a population of individual solutions. At each step, it selects individuals from the current population to be parents and uses them to produce the children for the next generation.
The basic idea is to mimic a simple picture of natural selection in order to find a good solution. The first step is to mutate, or randomly vary, a given collection of candidate solutions. The second step is selection, often done by measuring against a fitness function. The process is repeated until a suitable solution is found.
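The steps above (mutation, crossover, fitness-based selection, repetition) can be sketched on a toy problem: evolving a bit string toward all ones. The fitness function, mutation rate, and population size here are arbitrary choices for illustration.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def fitness(bits):
    """Toy fitness: the number of 1s; the optimum is a string of all 1s."""
    return sum(bits)

def mutate(bits, rate=0.05):
    """Randomly flip each bit with a small probability."""
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    """Single-point crossover of two parent bit strings."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=30, length=20, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # approaches the maximum of 20
```

Real applications replace the toy fitness function with a measure of solution quality for the problem at hand; the evolutionary loop itself stays the same.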
Fuzzy Logic
Fuzzy logic is an approach to variable processing that allows multiple values to be processed through the same variable. The idea of fuzzy logic was first advanced by Dr. Lotfi Zadeh of the University of California at Berkeley in the 1960s. Fuzzy logic is an approach to computing based on “degrees of truth” rather than the usual “true or false” (1 or 0) Boolean logic on which the modern computer is based.
In a fuzzy system there are not only the absolute values true and false; there are also intermediate values, which are partially true and partially false. The conventional logic block that a computer understands takes precise input and produces a definite output of TRUE or FALSE, equivalent to a human’s YES or NO. Lotfi Zadeh, the inventor of fuzzy logic, observed that unlike computers, human decision-making includes a range of possibilities between YES and NO.
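A degree of truth is expressed as a membership function that maps an input to a value between 0 and 1. As a sketch, the triangular function below (with arbitrary, made-up temperature thresholds) grades how strongly a temperature counts as “warm” rather than forcing a yes/no answer.

```python
def warm_membership(temp_c):
    """Triangular membership function: the degree (0.0 to 1.0) to which
    a temperature counts as 'warm'. Fully warm at 25 deg C; not warm at
    all below 15 or above 35. Thresholds are illustrative only."""
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10
    return (35 - temp_c) / 10

for t in (10, 20, 25, 30):
    print(t, warm_membership(t))
# 10 -> 0.0, 20 -> 0.5, 25 -> 1.0, 30 -> 0.5
```

A fuzzy controller combines several such membership degrees through rules and then “defuzzifies” the result into a crisp output.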
In recent years, the number and variety of applications of fuzzy logic have increased significantly. The applications range from consumer products such as cameras, camcorders, washing machines, and microwave ovens to industrial process control, medical instrumentation, decision-support systems, and portfolio selection.
Artificial Neural Networks
Artificial neural networks are one of the main tools used in machine learning. As the “neural” part of their name suggests, they are brain-inspired systems intended to replicate the way that we humans learn. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text, or time series, must be translated. An artificial neural network can be viewed as a nonlinear statistical model that captures complex relationships between inputs and outputs.
Artificial neural networks are composed of multiple nodes, which imitate the biological neurons of the human brain. The idea is based on the belief that the working of the human brain can be imitated, given the right connections, using silicon and wires in place of living neurons and dendrites. The human brain is composed of about 86 billion nerve cells called neurons, each connected to thousands of other cells by axons. Dendrites accept stimuli from the external environment or inputs from sensory organs. These inputs create electrical impulses, which travel quickly through the neural network, and a neuron can then pass the message on to other neurons. The neurons are connected by links and interact with each other.
Artificial neural networks use different layers of mathematical processing to make sense of the information they are fed. Typically, an artificial neural network has anywhere from dozens to millions of artificial neurons, called units, arranged in a series of layers. The input layer receives various forms of information from the outside world; this is the data the network aims to process or learn about. Most neural networks are fully connected from one layer to the next. These connections are weighted: the higher the number, the greater the influence one unit has on another, similar to a human brain. As the data passes through each unit, the network learns more about the data. On the other side of the network are the output units, where the network responds to the data it was given and processed.
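The flow from input layer through weighted connections to output units can be sketched as a forward pass. The weights below are hand-picked, hypothetical values (a trained network would learn them from data); each unit sums its weighted inputs, adds a bias, and applies a sigmoid activation.

```python
import math

def forward(inputs, layers):
    """Propagate inputs through fully connected layers.
    Each layer is a list of (weights, bias) pairs, one pair per unit."""
    activations = inputs
    for layer in layers:
        activations = [
            1 / (1 + math.exp(-(sum(w * a for w, a in zip(weights, activations)) + bias)))
            for weights, bias in layer
        ]
    return activations

# Hypothetical weights: 2 inputs -> 2 hidden units -> 1 output unit.
hidden = [([0.5, -0.4], 0.1), ([0.3, 0.8], -0.2)]
output = [([1.2, -0.7], 0.0)]
print(forward([1.0, 0.0], [hidden, output]))  # a single value between 0 and 1
```

Training adjusts the weights (typically by backpropagation) so that this forward pass produces the desired outputs; the sketch shows only the inference step.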
Natural Language Processing
Natural Language Processing, usually shortened as NLP, is a branch of artificial intelligence that deals with the interaction between computers and humans using natural language. The term encompasses many different techniques that allow computers to understand human speech and text. The objective of NLP is to read, decipher, understand, and make sense of human languages in a manner that is valuable. For example, NLP makes it possible for computers to read text, hear speech, interpret it, measure sentiment, and determine which parts are important.
NLP can be used to analyze a vast array of different types of speech and text data from different contexts. For example, it could be used to analyze or transcribe audio recordings of incoming customer service calls or help extract relevant clauses from a legal contract.
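Two of the simplest NLP steps mentioned above, reading text and measuring sentiment, can be sketched with tokenization and a lexicon lookup. The word lists here are a tiny, made-up lexicon; real sentiment analysis uses much larger lexicons or learned models.

```python
import re
from collections import Counter

# Toy sentiment lexicon, purely illustrative.
POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "poor", "terrible"}

def tokenize(text):
    """Split raw text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment(text):
    """Crude lexicon-based score: positive minus negative word counts."""
    counts = Counter(tokenize(text))
    return sum(counts[w] for w in POSITIVE) - sum(counts[w] for w in NEGATIVE)

print(tokenize("The service was great!"))                          # ['the', 'service', 'was', 'great']
print(sentiment("Great product, terrible support, great price."))  # 1
```

Even this crude scheme shows the pipeline shape shared by more serious systems: normalize text into tokens, then map tokens to a quantity of interest.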
Computerized Pattern Recognition
Pattern recognition is the process of distinguishing and segmenting data according to set criteria or by common elements, performed by special algorithms.
Pattern recognition can also be described as recognizing patterns with a machine learning algorithm: the classification of data based on knowledge already gained or on statistical information extracted from patterns and/or their representations. One of its important aspects is its broad application potential.
There are three main models of pattern recognition:
Statistical: to identify where the specific piece belongs.
Syntactic / Structural: to define a more complex relationship between elements.
Template Matching: to match the object’s features with a predefined template and identify the object by proxy.
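Of the three models, template matching is the easiest to sketch: compare an input against each stored template and pick the best match. The 3x3 binary “images” and template names below are invented for illustration.

```python
def match_score(pattern, template):
    """Fraction of cells where a binary pattern agrees with a template."""
    matches = sum(p == t
                  for row_p, row_t in zip(pattern, template)
                  for p, t in zip(row_p, row_t))
    total = len(pattern) * len(pattern[0])
    return matches / total

# Hypothetical templates: tiny 3x3 binary shapes.
TEMPLATES = {
    "vertical_bar":   [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "horizontal_bar": [[0, 0, 0], [1, 1, 1], [0, 0, 0]],
}

def classify(pattern):
    """Label the input with the name of the best-matching template."""
    return max(TEMPLATES, key=lambda name: match_score(pattern, TEMPLATES[name]))

noisy = [[0, 1, 0], [0, 1, 1], [0, 1, 0]]  # vertical bar with one noisy cell
print(classify(noisy))  # vertical_bar
```

Practical template matchers work the same way but slide the template across a larger image and use correlation rather than exact cell agreement.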
Voice recognition systems provide important benefits in establishing communication between the user and the computer; in particular, they make it possible to command the computer and dictate text by voice.
Robotics
Robotics is the branch of specialized engineering that deals with the design, construction, operation, and application of robots, and it is one of the branches of artificial intelligence. A robot is a man-made machine, consisting of mechanical and electronic units, that can perform work or other actions normally performed by humans. Robots are programmable machines that are usually able to carry out a series of actions autonomously or semi-autonomously.
Roboticists develop man-made mechanical devices that can move by themselves, whose motion must be modelled, planned, sensed, actuated and controlled, and whose motion behaviour can be influenced by programming. Robots are called intelligent if they succeed in moving in safe interaction with an unstructured environment, while autonomously achieving their specified tasks.
Although great advances have been made in the field of robotics during the last decade, robots are still not very useful in everyday life, as they are too clumsy to perform ordinary household chores.
Machine Learning
Machine learning is an artificial intelligence technique for developing computer systems that learn and evolve based on experience. It focuses on the development of computer programs that can access data and use it to learn for themselves. Machine learning algorithms are reaching a level where they can successfully learn and act based on the data around them, without needing to be explicitly programmed. Machine learning is also a popular technique for predicting the future or classifying information to help people make decisions.
Machine learning algorithms are trained over instances or examples, through which they learn from past experience and analyze historical data. As an algorithm trains over the examples, again and again, it becomes able to identify patterns in order to make predictions about the future. Common machine learning applications include operating self-driving cars, managing investment funds, performing legal discovery, making medical diagnoses, and evaluating creative work. Some machines are even being taught to play games.
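Learning from past examples to predict new cases can be sketched with one of the simplest algorithms, a nearest-neighbor classifier: it predicts the label of the training example closest to the query. The training data here (study and sleep hours versus exam outcome) is entirely hypothetical.

```python
import math

def nearest_neighbor(training, query):
    """Predict the label of the single closest training example (1-NN)."""
    features, label = min(training, key=lambda ex: math.dist(ex[0], query))
    return label

# Hypothetical past examples: (hours studied, hours slept) -> exam outcome.
training = [
    ((8.0, 7.0), "pass"),
    ((7.5, 6.5), "pass"),
    ((1.0, 4.0), "fail"),
    ((2.0, 5.0), "fail"),
]
print(nearest_neighbor(training, (6.0, 7.0)))  # pass
```

The pattern generalizes: every supervised learner consumes labeled examples and produces a function from features to predictions; more powerful methods differ mainly in how they compress the examples into that function.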
Traditionally, investment players in the securities market, such as financial researchers, analysts, asset managers, and individual investors, scour a great deal of information from different companies around the world to make profitable investment decisions. Yet there is only so much information humans can collect and process within a given time frame. This is where machine learning comes in.
Deep Learning
Deep learning is a next-generation artificial intelligence technique that lets computers teach themselves. A deep learning model is a “deep” neural network that includes many layers of neurons and is trained on a huge volume of data. This advanced type of machine learning can solve complex, non-linear problems, and it is responsible for AI breakthroughs such as natural language processing (NLP), personal digital assistants, and self-driving cars.
Deep learning techniques program machines to perform high-level thought and abstraction, such as image recognition. The technology has advanced marketing by enabling more personalization, audience clustering, predictive marketing, and sophisticated brand sentiment analysis.
Deep learning is one of the most exciting and powerful branches of machine learning. It teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign or to distinguish a pedestrian from a lamppost, and it is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers.