By Dr. Avishek Ranjan, IIT Bombay, India
Intelligent behavior of computers, the building blocks of the digital world, is usually termed artificial intelligence (AI), to distinguish it from the “natural” intelligence of humans and animals. In the context of AI, a term we hear often is machine learning (ML): the ability of computer algorithms to recognize patterns in large amounts of data and sometimes even learn from feedback. ML algorithms based on statistical models enable computers to make predictions or decisions, obviating the need to program computers from scratch. Research in AI has been ongoing since the 1950s, with many ups and downs, but several recent developments in AI-ML, for instance towards building self-driving cars, have re-invigorated the field. This has been enabled by the exponential growth in the availability of computing hardware, such as graphics processing units (GPUs), and the decrease in hardware cost. In this article, I will first discuss one such recent advancement in AI and how it is raising fundamental questions among educators about students’ learning behaviour, the goals and deliverables of higher education, and assessment methods. I will share my concerns about the impact this can have on higher education. Any technology also has its advantages if used appropriately, and AI is no exception: I will discuss two examples of how AI can accelerate research and impact lives. Finally, I will end with a philosophical note on the future of AI in higher education.
ChatGPT – a toy for students that is making them lazy and dumb
In November 2022, the company OpenAI released the online AI-based interface “ChatGPT”, where the acronym “GPT” stands for “Generative Pre-trained Transformer” [1]. One can ask it about almost anything, including questions about philosophy, history, mathematics, or science, and get a response that is well structured in both content and length. This is much unlike the plethora of links offered by search engines such as Google, where it takes a lot of effort to collect and collate information. Sometimes the answers from ChatGPT do contain errors, which, in a way, make it seem more “human-like”. Unlike with search engines, the user has the option to give feedback on the quality of the response, upon which ChatGPT re-generates the answer. In a few iterations, the machine “learns” and errors, if present, are minimized. So, in a way, the user is also a “product” on which the software is being tested and improved. (Without us realizing it, AI has long been used in our daily lives to give product recommendations on Amazon, customized ads on Google, suggested videos on YouTube, etc., based on our browsing history.) Needless to say, students from all around the world have flocked to ChatGPT to find answers to their homework problems, and sometimes even to questions in computer-based online exams. The “homework machine” of the 1960s comic books is finally a reality. And every student who asks the same question gets a different response, making plagiarism detection extremely difficult. When asked, “Do you think using ChatGPT for assignments by students is unethical?”, the first paragraph of a “well-rehearsed” response by ChatGPT is: “As an AI language model, I am programmed to provide information and answer questions to the best of my ability. I do not have the ability to make moral judgments or ethical decisions about how my responses are used. However, it is important to consider academic integrity and ethics when using any source of information, including AI language models like myself.”
While it may seem harmless as a tool to aid the learning activity, having a computer think on one’s behalf may lead to consequences such as a lack of incentive to learn anything at all, laziness, and confusion about what is unethical and what is not. Why should a student learn math and programming if he/she has a free-to-use tool that can do all the work, including the “intelligent” part involving logical reasoning? This is why this new development in AI is not the same as calculators replacing slide-rules for arithmetic calculations – there the laborious, time-consuming calculations were merely made faster [2]. The “neural networks” of ML, mathematical layers that “learn” from available data by mapping inputs to outputs, are inspired by the naturally occurring neural networks of the human brain [3]. So, it will be tragically ironic if humans no longer feel the need to train their own minds.
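To make the idea of an input-to-output mapping concrete, here is a minimal, self-contained sketch (purely illustrative, not taken from any particular library or textbook) of a one-hidden-layer neural network trained by gradient descent to learn the classic XOR function, which a single layer cannot represent:

```python
import numpy as np

# A tiny neural network "learning" a mapping from inputs to outputs.
# The XOR problem: four input pairs, with target 1 when exactly one input is 1.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 0.5, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)      # hidden-layer activations
    p = sigmoid(h @ W2 + b2)      # predicted outputs
    # Backpropagation for cross-entropy loss with a sigmoid output
    d_out = p - y
    d_hid = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X)
    b1 -= lr * d_hid.mean(axis=0)

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
print(np.round(pred).ravel())     # the network's learned XOR outputs
```

After training, the forward pass is just a few matrix multiplications – this is the “already available input-to-output mapping” that makes trained models so fast at prediction time.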
It has been claimed that the power of AI has been over-hyped, and that there is nothing truly original or novel that it can create. It is true that as long as there is a need for large amounts of “training data” (for example, in the context of self-driving cars these can be traffic rules, city road maps, traffic history, and driver/pedestrian behaviour), AI cannot be thought of as having intellectual insight (for example, being able to replace a Newton or an Einstein explaining why an apple falls downwards from a tree!). In his NYT article [4], Noam Chomsky writes: “The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.” and goes on to add that: “AI’s deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.”
While this criticism may sound too harsh, the points raised are genuine. Education is essentially training for the mind so that it can perform at its best with the fewest resources (indeed, infants and toddlers learn their mother tongue from remarkably little input!). It is true that a lot of effort, time, energy, and resources are needed to educate the human mind. Whether an AI equivalent of a human mind (also called artificial general intelligence, or AGI) is more or less efficient than humans for a specific purpose (say, that of an employer) is debatable. However, there is evidence from the past that the human mind has a natural tendency to become “lazier” when it has access to powerful tools. How many of us remember phone numbers, or can do the mental arithmetic that the previous generation could with the same level of education?
Advantages of using AI-ML in teaching and research
There is a lot of promise in the use of AI-ML as a tool for both education and research. Several of the mathematical methods that form its backbone, such as statistical regression and principal component analysis (PCA), have been in use for many years. Of the many advantages of AI-ML, I briefly discuss two examples – the first from the field of medical imaging and the second from my own field of research in computational fluid dynamics (CFD).
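As a concrete illustration of one of these backbone methods, the following short sketch computes principal components via the singular value decomposition. The data here are synthetic and purely for illustration:

```python
import numpy as np

# Principal component analysis (PCA) by hand: find the directions along
# which the data vary the most, and how much variance each explains.
rng = np.random.default_rng(42)

# 200 correlated 2-D points: the second coordinate roughly follows the first
x = rng.normal(size=200)
data = np.column_stack([x, 2 * x + 0.1 * rng.normal(size=200)])

centered = data - data.mean(axis=0)        # PCA requires centered data
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

explained = s ** 2 / np.sum(s ** 2)        # fraction of variance per component
print(explained)  # the first component captures almost all the variance
```

Because the two coordinates are strongly correlated, a single principal component summarizes the data almost perfectly – the essence of dimensionality reduction.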
Advanced imaging techniques such as MRI are now widely used in medical diagnostics for the detection of tumors, blockages, and other abnormal conditions. Timely detection and analysis of the images is crucial for diagnosis. If a database of many patients from around the world is available, with historical data connecting the imaged abnormality to the disease, ML methods can be used for accurate and timely diagnosis of life-threatening diseases such as cancer. In general, the larger the training data for the mathematical prediction models, the more accurate the analysis. This is one field where AI can have a direct, positive impact on saving human lives.
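The supervised-learning idea described above can be sketched in toy form. The “image features” and labels below are entirely fabricated for illustration – real medical-imaging pipelines are vastly more complex – but the principle of learning a diagnosis from labeled historical data is the same:

```python
import numpy as np

# Toy binary classifier: logistic regression mapping two synthetic
# "image features" to a label (1 = abnormal, 0 = normal).
rng = np.random.default_rng(1)

n = 500
features = rng.normal(size=(n, 2))
# Synthetic ground truth: abnormality depends on a noisy weighted feature sum
labels = (features @ np.array([2.0, -1.0])
          + 0.3 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(2)
b = 0.0
for _ in range(2000):                       # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
    w -= 0.5 * features.T @ (p - labels) / n
    b -= 0.5 * np.mean(p - labels)

p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
accuracy = np.mean((p > 0.5) == labels)
print(f"training accuracy: {accuracy:.2f}")
```

The larger and more representative the labeled database, the better such a model generalizes – which is exactly why large multi-patient datasets matter.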
Next, I turn to my own field of research, where AI-ML methods can also have an impact, albeit a less direct one. Fluid dynamics is the study of moving fluids (liquids and gases), with applications in a wide range of industries, such as aerospace, automotive, chemical, and power generation, as well as in weather prediction.
Often fluid dynamics is coupled with thermodynamics, which deals with heat and its conversion into useful work and other forms of energy. The basis of these fundamental topics is the set of equations for conservation of mass, momentum, and energy. It is often difficult to determine fluid motion exactly from laboratory experiments, for instance to measure the total drag or resistance on a car. Fluid dynamicists therefore write computer programs for high-end computers to solve these equations numerically while trying to minimize the errors. The equations for momentum conservation, also called the Navier-Stokes equations, are essentially Newton’s second law of motion, which says that the rate of change of momentum (mass times velocity) equals the total force acting on an object. These are complex partial differential equations with variations in both time and space. For almost all practical applications, they are notoriously difficult and computationally costly to solve even on high-end computers. For instance, if it takes three or more days to solve the equations to predict the weather three days ahead, there is no point in doing so! Here, ML models based on neural networks could be of huge help. If several sets of past data are available that can be correlated with predictions, and the ML models are trained on that data, the prediction for the weather three days ahead can arrive in just a few hours. Once the training is complete, which can be done well in advance, the predictions usually arrive very quickly, since the input-to-output mapping is already available [5]. Timely prediction can be life-saving in situations such as a hurricane travelling towards a coastal region.
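For the interested reader, the momentum-conservation equation for an incompressible Newtonian fluid takes the following standard form, where $\rho$ is the density, $\mathbf{u}$ the velocity field, $p$ the pressure, $\mu$ the dynamic viscosity, and $\mathbf{f}$ the body force per unit volume:

```latex
\rho \left( \frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\mathbf{u} \right)
  = -\nabla p + \mu \nabla^2 \mathbf{u} + \mathbf{f}
```

The left side is the rate of change of momentum following the fluid (Newton’s second law per unit volume); the right side collects the forces: pressure gradients, viscous stresses, and body forces such as gravity. The nonlinear term $(\mathbf{u} \cdot \nabla)\mathbf{u}$ is what makes these equations so difficult to solve.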
Two of the requirements, however, are that the training data should come from a roughly similar situation (judged by similar initial conditions such as temperature, humidity, geographical location, and wind speed in weather prediction), and that sufficient data should be available. Of course, this is not always the case, and without sufficient data the predictions can have large errors. The choice here can be a difficult one: an early or timely prediction with large errors versus an accurate prediction that comes too late!
Perspectives and final thoughts
Research in AI is progressing at a rapid pace and is expected to have a tremendous impact on society. There is already a huge amount of money invested at both laboratory and industrial scales. Compulsory teaching of AI for all engineering students has started in some institutes, such as the one in which I teach, despite many reservations. The arguments in favor of this change are that graduates need to be aware of the capabilities of AI and to be equipped for industry and business. But the hype and frenzy should not lead to the removal of fundamental courses from the curricula. If this happens, it will be a self-goal. Ironically, it seems quite certain that a lot of jobs will be replaced by AI-based tools such as ChatGPT [6]. The onus of up-skilling the workforce into areas that are difficult to replace lies on the educators. Perhaps teaching methods should be oriented more towards synthesis rather than analysis, with a minimal component of rote learning. The teaching-learning process will probably evolve from a “knowledge transfer and its assessment” mode to a “problem-solving and hands-on learning” mode, where the instructor is more of a facilitator whose work is to manage the learning experience. Teaching must incorporate behavioral aspects such as teamwork, emotional resilience, and moral responsibility, and promote creativity and intuitive reasoning.
Human brains are much more than computational machines, for example in the way they process subjective experience, emotions, etc. It is true that humans cannot match computers in the speed and accuracy of mathematical calculations. However, the most intelligent, creative, or capable humans remain far superior, at least at present, in what they can invent (such as computer chips), discover (fundamental scientific theories such as the general theory of relativity), or produce in awe-inspiring poetry or painting. A lot about human brains remains poorly understood. Many “supernatural” events that are unexplained by present scientific knowledge, such as some children remembering their past lives, are arrogantly brushed off as “pseudoscience”. (On the other hand, we are surrounded by many dogmas that must rightly be rejected through scientific, rational thinking and logical reasoning.) Much of what is intriguing about AI, for example its ability to create human-like language or write computer programs, depends heavily on the total knowledge presently documented on the internet. The use of AI as a tool for teaching and research must be encouraged as long as it brings efficiency in the use of resources [7]. However, if it is too energy- and resource-intensive, then other concerns, such as the warming climate and the well-being of all living entities, in particular those who are most vulnerable, must enter the narrative. The presence of AI-based technologies is on the rise and will continue to be in the near future. Will patients in the future trust a robotic hand, which would have the utmost precision, for a critical surgery, or the hand of an experienced surgeon who may have a lower success rate? I am not too sure. Will the students of the future have a personal tutor in their pockets, tailored to their learning needs? Probably yes.
References:
1) https://openai.com/blog/chatgpt
2) https://www.wired.com/2004/09/slide-rule-still-rules/
3) E. Maggiori (2023) Smart Until It’s Dumb: Why Artificial Intelligence Keeps Making Epic Mistakes. Applied Maths Ltd.
4) https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
5) https://www.neuralconcept.com/post/applying-machine-learning-in-cfd-to-accelerate-simulation
6) T. Eloundou, S. Manning, P. Mishkin, and D. Rock. (2023) GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. https://arxiv.org/pdf/2303.10130.pdf
7) P. R. Sarkar (1989). Prout and neo-humanism. Prout in a nutshell, 4(Part 17).