AI, ML and Testing
Why we should embrace AI
Artificial Intelligence: a title that can conjure up fears of Skynet taking over the world and everybody running from Terminators, or romantic images of Steven Spielberg's A.I., with an artificial boy produced to replace a son put into stasis.
It can also bring fear of losing jobs and being replaced by robots, so I am going to explore what it means in today's world, arrive at a reasonable definition and show how we as a community could embrace it.
Definition
The definitions below all come from dictionary.com.
1) Dictionary.com 2018:
“the capacity of a computer to perform operations analogous to learning and decision making in humans, as by an expert system, a program for CAD or CAM, or a program for the perception and recognition of shapes in computer vision systems.”
2) Collins English Dictionary - Complete & Unabridged 2012 Digital Edition:
“the study of the modelling of human mental functions by computer programs”
3) The American Heritage® Science Dictionary Copyright © 2002. Published by Houghton Mifflin. All rights reserved:
“The ability of a computer or other machine to perform actions thought to require intelligence. Among these actions are logical deduction and inference, creativity, the ability to make decisions based on past experience or insufficient or conflicting information, and the ability to understand spoken language.”
4) The New Dictionary of Cultural Literacy, Third Edition Copyright © 2005 by Houghton Mifflin Company. Published by Houghton Mifflin Company. All rights reserved:
“The means of duplicating or imitating intelligence in computers, robots, or other devices, which allows them to solve problems, discriminate among objects, and respond to voice commands.”
As you can see, each dictionary gives a different slant to the meaning, but if you compare them carefully you will see that they all say essentially the same thing. Using the four definitions above, I have created my own definition.
AI models what is thought to be human intelligence within a computer program.
That sentence deserves a closer look. Firstly, it is not stating that a machine has intelligence; it is purely modelling intelligence. Secondly, both ‘The American Heritage Science Dictionary’ and my definition hedge with “what is thought to be” intelligence. This is important, as we need to know what we mean by human intelligence so we know what is being modelled.
Human Intelligence
As explained on the website What is Intelligence, intelligence is one of those words that has divided the scientific community for many years, with no single definition ever really being agreed on. It has definitions such as the general mental ability to learn and apply knowledge to manipulate your environment, or the ability to reason and have abstract thought. The word itself comes from the Latin “intellegere”, which means “to understand”.
Some scientists, like psychologist Howard Gardner, even state that there are multiple intelligences, each covering a specific area like logic, spatial reasoning or language. There is also the theory that there is an emotional intelligence. Steven Sloman and Philip Fernbach talk about intelligence as communal in their book ‘The Knowledge Illusion’, which discusses how we actually gain intelligence and knowledge from our society, so it is not really an individual asset. This shows in our testing communities: our knowledge and intelligence is magnified by other people contributing. Sloman and Fernbach compare this to how bees operate in a hive.
I should also note that the way humans remember is different to how a computer remembers. Computers are great at storing huge amounts of data; humans, on the other hand, are not. We are great Google machines that can reference things, so we know how to get at data again, but we are not so good at storing it. Sloman and Fernbach describe this in chapter 3 of ‘The Knowledge Illusion’ and discuss a condition called hyperthymesia, in which humans can remember things in detail just like a computer; they also describe how this can become a burden for those who have the condition.
So clearly computers are very good at certain tasks, but they are not perfect. John Haugeland coined the term Good Old Fashioned AI (GOFAI) for the way we used to think of AI: a separation of the logical program, the processing machine that was the AI, from the machine it inhabited. Coincidentally, this is how the field of cognitive science started out thinking about intelligence, as something separate from the bodies we inhabit. Later it was discovered through robotics research that this was not a good platform for thinking about AI, as the time it took to process everything was phenomenal even on powerful machines. In some circles it is thought that even with the ability to link multiple machines together (creating an artificial hive) we would never be able to model human intelligence fully without incredible lag in the thinking time needed. With that in mind, I think my definition of AI should be adjusted to the sentence below.
AI models a specific area of human intelligence within a computer program, depending on the problem to be solved.
Should we fear AI?
We have a definition for AI. Does it look like HAL? Definitely not. So what is all the fuss and fear about? I recently read an article on LinkedIn about how some medical clinicians were fearful that AI could replace them. This is the primary source of fear around AI: the fear of being replaced. As testers, could we really be replaced by a machine?
James Bach and Michael Bolton define testing in this blog post:
“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.”
There are numerous activities going on while you are testing. You use your emotions, your knowledge, your learning ability and many more skills to fulfil the testing role. Take just one of them, modelling emotions: this quickly becomes very complex. The Conversation has a good article on AI modelling emotions. It shows several uses for AI detecting emotion, but not feeling emotion, and that is a key part of testing: how does the software make you feel? Emotions are affected by external influences like a bad night's sleep. To program emotions, or for an AI to learn emotions, it would have to be subject to these external influences, like a bad night's sleep or a bad user experience with a piece of software.
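To make that distinction concrete, here is a toy sketch in Python of what “detecting” emotion often amounts to: pattern matching over signals. The word lists and the example feedback are invented for illustration; a real system would use a trained model, but the point stands either way.

```python
# Toy emotion "detector": it matches patterns in text, it does not feel.
# The word lists below are invented for illustration only.
NEGATIVE_WORDS = {"frustrating", "confusing", "slow", "broken"}
POSITIVE_WORDS = {"fast", "intuitive", "delightful", "clear"}

def detect_sentiment(feedback: str) -> str:
    """Label user feedback by counting emotion-laden keywords."""
    words = set(feedback.lower().split())
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(detect_sentiment("the checkout flow was slow and confusing"))  # negative
```

However sophisticated the model behind a real detector, it is still assigning labels; it has no bad night's sleep of its own behind the judgement.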
What about the skill of questioning: could this be modelled in AI? The previously mentioned article from The Conversation mentions neural networks. Mike Talks introduces neural networks in a series of blogs starting with this one. A neural network is a form of machine learning that tries to model how the brain works while learning. You can set your network to learning mode and allow it to process the data, then switch it into operation mode to act on what it has learned. Machine learning can fall into three categories:
- Supervised learning
- Unsupervised learning
- Reinforcement learning
Neural networks are not the only algorithms that learn this way; others, such as Bayesian networks, decision trees and K-Means clustering, exist too. A questioning AI would have to learn from the data it needs to be able to ask questions, and then go into operating mode to ask them. It forms patterns within the data it learns from, so it can successfully predict the function it needs to perform; in this case, asking questions about the software.
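To make the learning mode and operation mode split concrete, here is a minimal sketch using a decision tree (one of the algorithms mentioned above) via scikit-learn. The features, labels and numbers are invented purely for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented training data: each row is [lines changed, files touched]
# for a code change, labelled 1 if it later caused a regression.
changes = [[120, 8], [5, 1], [300, 20], [12, 2], [90, 6], [4, 1]]
caused_regression = [1, 0, 1, 0, 1, 0]

# "Learning mode": the model forms patterns within the labelled data.
model = DecisionTreeClassifier(random_state=0)
model.fit(changes, caused_regression)

# "Operation mode": the trained model acts on data it has not seen before.
print(model.predict([[150, 10]]))  # e.g. [1] -> likely to cause a regression
```

This is supervised learning: the model is told the right answers up front. Unsupervised learning would instead look for structure in the data (say, clustering similar failures with K-Means) without any labels at all.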
What about the unknowns? As humans we also don't know about the unknowns, especially the unknown unknowns, but we are extremely adaptable, so we can react to the unknowns better. As humans we are also capable of asking questions out of nowhere. We can ask a question that may seem like a stupid question at the time (there is no such thing as a stupid question) yet sparks deeper thinking into the product and opens up more questions. Asking stupid questions would have to be learnt by the AI: where would it get the data from to be able to provoke a stupid question?
Just by looking at these two small areas of being a tester, I have hopefully shown that we do not have to worry about being replaced. Testing is such a big area that AI, especially under the GOFAI way of thinking, would just not be able to cope with it all at once.
Cognitive QA
Cognitive QA is a term I first saw in the white paper written by Humayun Shaukat and Rik Marselis for Sogeti. It describes testers working alongside AI to increase testing efficiency, using predictive analytics to highlight the risk areas of a project: AI helping you to test more efficiently.
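I have not yet dug into how the tools behind the white paper work, but the predictive-analytics idea can be sketched simply. Assuming you have history about which areas of past releases produced defects (the area names and numbers below are invented), you could rank the current release's areas by predicted risk:

```python
from sklearn.linear_model import LogisticRegression

# Invented history: [recent commits, defects found last release] per area,
# labelled 1 if the area went on to produce a defect in production.
history = [[25, 4], [3, 0], [40, 7], [5, 1], [18, 3], [2, 0]]
had_defect = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(history, had_defect)

# Score the current release's areas and rank them by predicted risk,
# so testers can point their exploration where it is most likely to pay off.
areas = {"checkout": [30, 5], "login": [4, 0], "search": [12, 2]}
risk = {name: model.predict_proba([feats])[0][1] for name, feats in areas.items()}
for name, probability in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {probability:.0%} predicted risk")
```

The tool highlights where to look; deciding what the findings mean is still testing work.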
I have only briefly covered cognitive QA as I plan to do a series of blogs on this one subject on my exploration and learnings.
AI and Checking
James Bach and Michael Bolton also define checking:
“Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.”
Looking at the definition of checking, I see a perfect use for AI: it can help us perform better and more informed checks. This is already being looked at, as Joe Colantonio describes in this blog post on AI test automation. We all use tools to help us perform better checks and better testing; AI could become, and in some small areas has already started to become, another useful tool in our repertoire.
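A check in this sense is something we can express directly in code. Here is a minimal sketch (the URL and the 500 ms budget are invented for illustration) of an algorithmic decision rule applied to a specific observation of a product:

```python
import time
import urllib.request

def check_response_time(url: str, budget_seconds: float = 0.5) -> bool:
    """Observe the product once and apply an algorithmic decision rule."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        response.read()  # the specific observation
    elapsed = time.monotonic() - start
    return elapsed <= budget_seconds  # the decision rule

print(check_response_time("https://example.com"))
```

Where AI fits in is generating, selecting and maintaining checks like this at scale; interpreting what a pass or a fail actually means remains the tester's job.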
But still…
Elon Musk wrote in a comment on Edge.org:
“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most”
You may still not be convinced, and who knows what the future will bring? If AI does start doing certain tasks that testers traditionally did, how do we know they are being completed correctly? It is not just about AI doing the task correctly, as stated in this article from IDG Connect about gender bias, which describes how prejudices could be accidentally or maliciously added to AI.
Nick Bilton wrote in the New York Times:
“The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”
Stephen Hawking also held the same view, in this BBC article. It will need professionals to test that the correct result is being produced. It will need professionals to make sure that it can't be abused, and to think about security, performance, usability, accessibility and so on. We may even have to start considering the morality of the project and whether it could cause harm. Does this description ring a bell? A colleague said to me, while we were talking about this issue, that we may become the watchers of the watchers.
AI is coming, and we can't bury our heads in the sand and try to ignore it. It will impact the shape of testing, but it is something we should embrace. We should be forging the future of AI and testing practices. As a community we learn from each other about new ways of testing, new ways of thinking about testing, new automation tools and so on. AI should also be talked about amongst the community: good uses, bad uses and lessons learned. As Jeff Hawkins said:
“The key to artificial intelligence has always been the representation.”
Thank you for making it to the bottom of my blog. I hope you enjoyed it; please get in touch if you are interested in any of the subjects mentioned.