The twenty-first century has witnessed tremendous growth in computing technology. Artificial Intelligence is one of the fields of Information Technology that has benefited most from this growth. Computers and their accessories have become increasingly affordable, allowing more people to participate in the evolution of the technology. Interest in developing machines with human-like thinking capabilities began in the mid-twentieth century, building on Alan Turing's theoretical work on the Turing machine. Since then, engineers, scientists, and software programmers have created programs and algorithms that enable machines to act autonomously. The future presents a vast array of possibilities, spurred mainly by advances in technology and contributions to the field. An often-asked question is whether future AI will be capable of sentient thought. This research explores the principles, supporting technologies, breakthroughs, and impacts of AI technology in modern society.
Principles Behind Artificial Intelligence
Throughout AI's evolution, experts have divided it into various, often loosely related subfields. These subfields are distinguished by technical considerations, development goals (e.g., robotics and machine learning), particular tools in use (such as neural networks), and philosophical considerations. AI research focuses mainly on reasoning, perception, natural language processing, learning, planning, knowledge representation, and the capability for motion and spatial manipulation. The discipline applies multiple approaches to these problems, including traditional symbolic intelligence, computational intelligence, and statistical methods. Thus, the field of AI employs operating principles from computer science, mathematics, linguistics, philosophy, and psychology, among many others. This section covers the principles of Artificial Intelligence.
FIGURE 1: Bayesian statistics simulating AI thinking using an Artificial Neural Network (Fattal, 2018).
The main principle behind human-like machine thinking is the ability to learn. Intelligent machines take in data about their environment, observe patterns and trends, and then model their behavior to best match that environment. Various subfields of AI are dedicated to artificial learning. Machine Learning and Deep Learning are computational methods developed mainly to effect learning in computers. The fields of big data and statistics concern themselves chiefly with the acquisition of data and the detection of trends within sets of information (Domingos & Lowd, 2009). Mathematics helps create algorithms that determine patterns and predict future trends. Linguistics, psychology, and philosophy contribute to AI learning by observing knowledge acquisition and usage in humans.
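The learning cycle described above, observing data, extracting a pattern, and then predicting, can be sketched in miniature with a least-squares line fit. The data points and parameters below are invented purely for illustration; real machine learning systems use far richer models and far more data.

```python
# Minimal illustration of machine learning: fit a line y = w*x + b
# to observed data by least squares, then predict an unseen point.
# The observations here are made-up numbers near y = 2x + 1.

def fit_line(xs, ys):
    """Return slope w and intercept b minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# "Observed" environment data
xs = [1, 2, 3, 4, 5]
ys = [3.1, 4.9, 7.2, 9.0, 11.1]

w, b = fit_line(xs, ys)        # learned parameters, close to 2 and 1
prediction = w * 6 + b         # extrapolate to a future, unseen input
```

The machine never sees the rule "y = 2x + 1"; it recovers an approximation of it from data alone, which is the essence of the learning principle.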
Mind Simulation and Cybernetics
This is the arm of Artificial Intelligence concerned mainly with building machine systems that mimic the human body and mind. The field of robotics has come a long way, from the machine-like structures used in early twentieth-century manufacturing to the life-like animatronics in use today. Computer scientists and engineers have also created neural networks that react to real-world stimuli much as organisms do (Bor, 2015). Modern robots pick up environmental data, analyze the information, and develop action plans that enable a proper response to the conditions. The creation of such systems results from the interaction between the fields of neurobiology, information theory, and cybernetics.
Symbolic Reasoning and Reduction
With the advent of computers, the field of AI needed techniques that represented the real world as a set of machine-readable symbols. Various experts therefore demonstrated methods to reduce human intelligence to symbolic manipulation. The first approach to symbolic reasoning was cognitive simulation, which uses computer programs that simulate human problem-solving methods. Logic-based reasoning eliminated the need to simulate the human thinking process, instead relying on abstract problem-solving and reasoning, regardless of how a solution is arrived at (Stiegler, Dahal, Maucher & Livingstone, 2017). Anti-logic reasoning advocates ad-hoc solutions, since no single general principle can capture human thinking. Knowledge-based reasoning has led to the expert AI systems in use today.
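The knowledge-based reasoning mentioned above can be illustrated with a tiny forward-chaining rule engine: known facts are symbols, and rules derive new symbols until nothing more follows. The medical facts and rules below are invented for demonstration and are not from any real expert system.

```python
# A toy forward-chaining inference engine, in the spirit of
# knowledge-based (expert-system) reasoning. Each rule pairs a set of
# required facts with a conclusion that may be added to the fact base.

rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"has_fever", "has_cough", "short_of_breath"}, rules)
# derived now also contains "possible_flu" and "see_doctor"
```

Real expert systems add conflict resolution, uncertainty handling, and thousands of rules, but the core loop of matching symbolic conditions against a fact base is the same.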
Technologies that Fuel the Rapid Development of AI
The field of Artificial Intelligence has experienced booms and slumps since the 1960s. Over the past two decades, however, it has seen unprecedented growth, with developments across many supporting fields. This section discusses a few technologies that have spurred the growth and development of AI machines.
Information and Communication Technology
Since Alan Turing's foundational work and the first electronic computers of the 1940s, developments in computing technology have brought us closer to machine intelligence. The development of transistor technology in the 1950s and 1960s gave rise to machines that could perform calculations on symbols representing real-world quantities (Chrisley, 2003). The growth of the internet has spurred research and development by enabling worldwide collaboration and access to resources and information on AI. Internet connectivity has also led to technologies that allow computers to learn at a global scale. Growth of the Personal Computer (PC) market has further spurred AI by facilitating the collection of data from a diverse population, enabling the observation of universal trends.
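The Turing machine underlying this history is itself simple enough to sketch: a finite control reads and writes symbols on a tape, moving one cell at a time. The simulator and the bit-inverting transition table below are a minimal illustrative example, not Turing's original formulation.

```python
# Minimal Turing machine simulator. The machine is a transition table
# mapping (state, symbol) -> (next_state, symbol_to_write, head_move).
# This toy machine inverts every bit of a binary string, then halts
# when it reaches the first blank cell.

def run_tm(tape, transitions, state="start", blank="_"):
    tape = dict(enumerate(tape))  # sparse representation of an infinite tape
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    end = max(tape) + 1
    return "".join(tape.get(i, blank) for i in range(end)).rstrip(blank)

transitions = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

result = run_tm("1011", transitions)   # -> "0100"
```

Despite its simplicity, this model captures everything a modern computer can compute, which is why it anchors the theoretical side of AI.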
Advances in manufacturing technology have led to the creation of machines that can physically mimic the human body. Research and development in material science have enabled experts and companies to create textures and materials that let machines sense changes in the environment around them. The development of Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) enables the precise manufacture of component parts, with simulations that allow testing before production (Spector, 2006). The fields of mechatronics and robotics enable the creation of machines with bodies built as a physical fit for the environments they occupy. Further advances in manufacturing will enable the development of machines with human-like sensitivity and response to environmental stimuli.
Biotechnology is set to be one of the greatest beneficiaries of AI development. Conversely, advances in biotechnology have also spurred the development of AI technology. Research into genetics led to the discovery of the structure of DNA and the mapping of the human genome. These discoveries help engineers design systems that mimic humans at the molecular level (Spector, 2006). Studies of the physiology of various plants and animals help designers create machine systems that optimally utilize natural resources. Observing the biological and chemical makeup of organisms leads to systems that can address the challenges that we humans face when interacting with our environment. Biotechnology has also allowed the study of patterns in the brain's neurons during thought, further boosting the development of artificial neural networks.
Breakthrough – Driverless Cars
The automotive industry is, perhaps, one of the greatest beneficiaries of advances in AI technology. AI is partly responsible for such systems as satellite navigation, cruise control, assisted parking, and traffic monitoring. One of the greatest breakthroughs, however, is the emergence of mainstream driverless cars (Trappl, 2016). While not a new phenomenon, the number of driverless cars on public roads has increased tremendously over the past five years. One major player in the development of autonomous cars is Tesla Motors, whose CEO, Elon Musk, is publicly wary of advances in AI. Even so, the company has gone on to produce vehicles with its semi-autonomous Autopilot system, including the mass-market Tesla Model 3.
Artificial Intelligence is central to the operation of driverless cars. These cars harvest road and traffic data through a wide range of sensors, interpret the data, and then enact the correct procedures in response to road conditions. The sensors detect, among other things, weather conditions, lane traffic, and pedestrian movement. The vehicles' on-board computers then process this information to determine the most suitable course of action (Trappl, 2016). Several technology companies, such as NVIDIA and drive.ai, have invested heavily in R&D on intelligent systems for autonomous vehicles. These companies use deep learning techniques to determine typical human behavior in driving conditions and steer the vehicle along the most suitable path.
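The sense-interpret-act cycle described above can be sketched as a single decision step. Every sensor name, threshold, and action below is a hypothetical simplification invented for illustration; a real autonomous-driving stack fuses continuous sensor streams through learned models rather than hand-written rules.

```python
# Hypothetical sketch of one tick of a driverless car's
# sense -> interpret -> act loop. Names and thresholds are invented.

def decide(readings):
    """Map simplified sensor readings to a single driving action,
    checking the most safety-critical conditions first."""
    if readings["pedestrian_ahead"]:
        return "brake"
    if readings["lead_car_distance_m"] < 20:
        return "slow_down"
    if readings["rain"]:
        return "reduce_speed"
    return "maintain_speed"

# One "sensed" snapshot of the road
readings = {"pedestrian_ahead": False,
            "lead_car_distance_m": 15,
            "rain": True}

action = decide(readings)   # -> "slow_down"
```

Ordering the checks by safety priority mirrors how real systems arbitrate between competing signals: a detected pedestrian overrides every other consideration.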
Challenges for AI Researchers/Experts
Artificial Intelligence promises enormous potential for manufacturing, transport, entertainment, education, and medicine, among other fields. These fields can only benefit from AI through iterative processes of development, testing, and improvement. Throughout its evolution, AI has faced numerous challenges, whether technical, social, financial, or ethical. This section examines the in-field challenges for AI researchers and scientists.
FIGURE 2: In-depth AI academic research advancement timeline (Mauro, 2016).
The last few years have seen tremendous growth in complex software systems that interact with human data. Artificial Intelligence now touches the most personal aspects of our lives. This interaction between systems and data has fostered the growth of the information economy, which has changed how businesses run. Ethical issues arise whenever personal data is used to improve business. One great ethical challenge of AI is ensuring that the gains of the technology benefit society as a whole (Prokopenko, 2014). Mining personal data to improve business only for the individuals and companies that can afford it verges on the unethical. The handling of this data also presents a challenge, as individuals want assurance that their data is safe.
Central to the implementation of AI technologies is obtaining, processing, and retaining data. The effectiveness of each AI system depends on the quality of the data in use. The data used to analyze trends and behavior patterns should be drawn from a large, balanced sample, since erroneous inputs can cause systems to predict incorrectly. AI techniques also require computers that can perform numerous calculations rapidly (Prokopenko, 2014). As data volumes grow, experts require ever more processing power to perform these calculations. Thus, AI can only make significant progress if the underlying computer technologies keep up with the volumes of data being processed.
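Why a balanced sample matters can be seen with a deliberately skewed toy dataset: a "model" that always predicts the majority class scores high accuracy while being useless. The labels and proportions below are invented for demonstration.

```python
# Illustration of the data-quality point above: on an imbalanced
# dataset, a classifier that always predicts the majority class
# looks accurate while learning nothing about the rare class.

labels = ["healthy"] * 95 + ["sick"] * 5   # 95% / 5% imbalance

predictions = ["healthy"] * 100            # trivial majority-class "model"

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
# accuracy is 0.95 even though every "sick" case is missed

sick_recall = sum(p == y == "sick" for p, y in zip(predictions, labels)) / 5
# recall on the rare class is 0.0
```

This is why practitioners evaluate per-class metrics such as recall, not just overall accuracy, before trusting a system trained on real-world data.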
A major issue in modern AI implementation is establishing public trust. Many members of the public worry that developments in AI technologies could lead to a scarcity of jobs, impacting the quality of life. People unfamiliar with how AI works often imagine AI machines as sentient beings out to rule the world (Trappl, 2016). Many movies depict an AI takeover as a doomsday scenario, stoking public fear over the capabilities of these technologies. Many individuals also object to computers storing their data and using it for prediction and behavior modeling. To resolve these trust issues, AI companies should educate the public on the operation and benefits of AI systems.
The Impact of AI on Science, Technology, and Society
Artificial Intelligence has far-reaching effects on scientific study and human interaction. AI systems have changed the way we perform experiments, especially on impractically small or large objects. Particle physics, the branch of science that studies the fundamental constituents of matter using high-energy collisions, owes much of its data analysis to developments in AI (Spector, 2006). AI helps extract important data by filtering out unwanted noise, leaving behind the trace signal. AI machines have also aided research in astronomy, medicine, and geographical exploration.
AI has also fostered technological development. AI systems are used to precisely manufacture parts for the electronics found in most modern systems and devices. AI has also aided research and development by providing technologies that enable safe prototyping and testing of new products (Chrisley, 2003). Artificial Intelligence has boosted manufacturing by enabling the creation of precision design and manufacturing systems. The field is set to benefit from further advances such as nanotechnology, quantum computing, driverless automobiles, and smart systems (homes, farms, etc.).
It is hard to ignore the impact of AI technology on modern societal living. Social media, an unmissable 21st-century phenomenon, owes its existence to AI technologies, which use algorithms and personal data to help determine one's network and preferences (Prokopenko, 2014). Besides facilitating communication, AI on social media affects commerce. Many AI companies analyze personal data to find preferences and trends in consumption, then sell this information to vendors who develop their products in line with their target audience's tastes and preferences.
AI also streamlines other aspects of modern living. For instance, AI has boosted business worldwide by connecting sellers and buyers in geographically disparate locations (Spector, 2006). It has also spurred business by observing trends in the population and suggesting the best business moves. In the future, AIs will help run businesses through applications such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) systems, and will assist in product delivery using unmanned vehicles and other delivery systems.
Besides business, AI systems impact modern medicine and healthcare. Over the past couple of years, Health Information Systems have gained popularity in healthcare centers. These systems connect patients, caregivers, and families and help them determine the best medical interventions. Such data helps caregivers gain the insight of other professionals when attending to their patients (Bor, 2015). These systems also help families manage their patients' health in the absence of a caregiver. In the future, these systems may gather enough data to correctly diagnose infections and issue prescriptions to patients.
Other areas of modern life that could benefit from AI include space exploration, education, entertainment, construction, and utility operations.
This research has covered the basis, principles, breakthroughs, challenges, and impacts of Artificial Intelligence. The main operating principles of AI are learning, simulation and robotics, and reasoning and reduction. These principles have emerged from breakthroughs in various technologies. The main technologies that have driven AI forward include Information and Communication Technology (ICT), manufacturing technology, and biotechnology. The surging popularity of autonomous cars represents one recent significant breakthrough in the field. While promising, AI faces technical, social, financial, and ethical challenges. If these challenges are addressed, AI could effectively revolutionize science, technology, and society.