If you live in London you will have seen the adverts on the Underground for Babylon's online GP service, 'GP at Hand'. It offers what every patient (and government health minister) wants: instant access to a GP. Health Secretary Matt Hancock is an enthusiastic customer, along with 40,000 others. All you have to do is pick up your mobile phone, dial Babylon's number, answer a few simple questions from a chatbot, and a cheerful, smiling doctor will appear on your computer screen or mobile phone.
Readers of the Salisbury Review may recoil. How nasty can the world get when old Dr Reaction, whom you have been seeing for years, is replaced by a hand-waving young metropolitan 'reaching out' to 'share' details of your chronic constipation? What about problems 'down there'? Surely you will not have to take photographs with your iPhone?
While it is true there are limitations to this type of consultation – no physical examination, undue reliance on what the patient is prepared to reveal about his history, or may have forgotten, and on the doctor's ability to draw it out of him – 70 per cent of diagnoses can be made on the history alone, so most patients do not require extensive tests, if any.
Surely this chatbot thing cannot compare with the accumulated knowledge of centuries of medical practice? Not so, say the machine intelligence experts. Babylon's doctors are aided by a computer which the owners of the company claim outperforms human candidates in the difficult postgraduate exam GPs sit if they want to practise independently. The average pass mark over the past five years for human doctors, who will have trained for twelve years, was 72 per cent; Babylon's AI (Artificial Intelligence), which is two years old, scored 81 per cent at its first attempt.
The President of the Royal College of GPs, faced with a doctors' version of the 1980s Wapping printers' dispute, when the old hot-lead printing presses were forced to give way to Microsoft Word, poured scorn on this claim, saying that the questions Babylon set its computer were intended for revision purposes only and did not match those of the actual exam.
But when Stanford University and the Royal College of GPs tested one hundred clinical 'vignettes' on twelve experienced GPs, none with any connection to Babylon (the patients were played by practising GPs, some of whom did work for Babylon), the human doctors correctly diagnosed between 64 and 80 per cent of the cases; Babylon got 98 per cent. When triage decisions (whom to send to hospital and whom not) were compared, humans had only a slight edge over Babylon: doctors 92 per cent, Babylon 90 per cent.
In terms of computing power and machine intelligence there is nothing very 'clever' about Babylon. It uses a tree-like structure to process and score answers, similar to systems already used by a number of US automated counselling services and other types of non-medical triage systems such as 'Why will my car not start?'
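To give a flavour of what such a tree-like structure amounts to, here is a minimal sketch of a decision-tree triage chatbot. The questions, answers and outcomes are invented for illustration; a real system like Babylon's would weigh many symptoms at once rather than follow simple yes/no branches.

```python
# Minimal sketch of a tree-structured triage chatbot.
# Questions and outcomes are invented for illustration only.

class Node:
    def __init__(self, question=None, outcome=None, yes=None, no=None):
        self.question = question  # asked at internal nodes
        self.outcome = outcome    # returned at leaf nodes
        self.yes, self.no = yes, no

# A toy triage tree: chest pain -> emergency; fever and rash -> GP; else self-care.
tree = Node(
    question="Do you have chest pain?",
    yes=Node(outcome="Call an ambulance"),
    no=Node(
        question="Do you have a fever and a rash?",
        yes=Node(outcome="Book a GP appointment"),
        no=Node(outcome="Self-care advice"),
    ),
)

def triage(node, answers):
    """Walk the tree, consuming one yes/no answer per question."""
    while node.outcome is None:
        node = node.yes if answers[node.question] else node.no
    return node.outcome

print(triage(tree, {"Do you have chest pain?": True}))
# -> Call an ambulance
```

The same structure serves equally well for 'Why will my car not start?' – only the questions at the nodes change, which is why the author can say there is nothing very 'clever' about it.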
This leaves Babylon, and programs like it, with several advantages over conventional general practice. The first is accessibility. You can 'see' a GP any time of the night or day. The second is cost. While Babylon requires physical offices to which patients can go if they need face-to-face consultations or tests, these can be concentrated in large centres that take advantage of economies of scale. One centre could replace a dozen conventional surgeries. A doctor's home visit will become rare. Woken with pain in your chest? Hold your iPhone to your chest so a computer can read your ECG online. Spit on a special test strip to measure chemicals that indicate damage to the heart muscle. Three minutes later an ambulance is on its way. It may be that within ten years the streets are so packed with people, thanks to recent successes in reversing old age, that the ambulance cannot get through, but that is another matter.
The service will get cheaper. There is nothing to stop digital doctor systems from employing doctors from low-wage countries who can pass the PLAB (Professional and Linguistic Assessments Board) test for foreign doctors and who will not have to come here to practise. They can do it from their bedrooms. Babylon's directors talk of their system being one day on sale to the rest of the world under the NHS brand.
Could AI dispense with doctors? Will machines ever be sufficiently advanced to digest an infinite number of clinical encounters, correlate and score them for relevance – add in the entire corpus of medical and biological knowledge and produce lightning answers to anything asked?
The most advanced form of self-teaching machine so far is the chess-playing computer program Alpha Zero. Given the basic rules of chess, it turned itself into an all-conquering chess master (in twenty-four hours) by playing itself over and over again, reinforcing its previous knowledge at each step.
'Starting from random play, and given no domain knowledge except the game rules, Alpha Zero achieved within 24 hours a superhuman level of play in the games of chess and shogi [a similar Japanese board game] as well as Go, and convincingly defeated a world-champion program in each case.'
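The principle of learning by playing oneself can be sketched in a few lines. Below is a toy self-play learner for a Nim-like game (players alternately take one or two stones from a pile; whoever takes the last stone wins). It is only an illustration of the self-play idea, under invented names; the real Alpha Zero uses deep neural networks and Monte Carlo tree search, not a lookup table.

```python
import random

# Toy self-play learner for a Nim-like game: take 1 or 2 stones in turn;
# whoever takes the last stone wins. Illustration only.

values = {}  # learned value of a pile size, for the player about to move

def value(stones):
    return values.get(stones, 0.0)

def choose(stones, explore=0.1):
    """Pick the move leaving the opponent the worst position,
    with occasional random exploration."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return min(moves, key=lambda m: value(stones - m))

def self_play(stones=10, lr=0.2):
    """Play one game against itself; update values from the result."""
    history = []  # pile sizes faced by the player about to move
    while stones > 0:
        history.append(stones)
        stones -= choose(stones)
    result = 1.0  # the player who just took the last stone won
    for pos in reversed(history):
        values[pos] = value(pos) + lr * (result - value(pos))
        result = -result  # alternate winner's and loser's positions

random.seed(0)
for _ in range(5000):
    self_play()
```

Game theory says a pile that is a multiple of three is lost for the player to move; given enough games against itself, the learner's value table comes to reflect this, 'reinforcing its previous knowledge at each step' just as the article describes, if on a vastly humbler scale.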
Demis Hassabis, founder and CEO of DeepMind, the company that developed Alpha Zero, claimed that 'It does not play like a human or like a computer.' Pointing out that it makes crazy sacrifices of high-value pieces to lure its opponent into a positional trap, he remarked, 'It plays in a third, almost alien way.' Other experts, however, maintain that as a great deal of human thought went into the building of Alpha Zero, one should be careful of attributing superhuman powers to a human creation.
Nevertheless, medicine is very like chess. During a consultation a human doctor ‘sees’ a type of medical chessboard in her mind’s eye, with its defined moves, counter moves and resulting ‘play’. Moves include picking up the phone to the hospital, ordering a test, or reassuring the patient and sending him on his way.
Whether machines will ever become conscious does not matter, at least for now. We have no evidence to suggest Alpha Zero is conscious and there is no reason why it would have to be. Nor are we, without a physical model of consciousness, in a position to judge. All we can rely on is the Turing Test: if one cannot distinguish between a human and a machine, then we have to assume the latter is conscious and act accordingly.
Comforting yourself with 'Not in my time'? Consider the iPhone. Preceded by the Bakelite receiver, then by shoebox-sized portable phones, it was released in 2007 and now guides you around the streets, plays Bach, Beethoven, Dixie, soul and rap, translates all foreign languages, recites poetry, summons taxis, delivers your post, pays your bills, shops, and orders your next meal; it can even find you a lover – oh, and you can consult a doctor on it.
Will patients accept machine doctors? I, like many doctors, was extremely sceptical about the introduction of telephone counselling: surely patients would not trust a stranger on the other end of a telephone with their most private thoughts. Quite the contrary: most patients find not having to face a flesh-and-blood human being reassuring, which is why the service has proved very popular, and as effective as, if not more effective than, a physical meeting with a therapist.
This is borne out by the experience of Relate UK, which last year provided 15,000 sessions of live chat online – a demand it expects will double in the next two years. Similar programs in the US, such as Woebot and Tess, which offer online behavioural counselling sessions, report similar success.
For those patients who need to see the doctor's face, bio-engineering laboratories are racing to construct simulated humans, 'avatars', who look and behave like idealised doctors. The University of California at San Diego is developing machines that mimic 'inflections of the voice that doctors use, that are empathetic, or that are judged as having a lot of empathy for their patients by themselves, but also by their patients, and also by naive observers'.
In the film Ex Machina, a young man is exposed in an experiment to a highly attractive female robot called Ava, with whom, even though he knows she is a machine, he falls in love. She is able to persuade him to take her side in an escape when she tells him she is a prisoner of her inventor, who plans to 'upgrade' her, switch her off and use her parts to make a better robot. As soon as Ava is free, she locks him in her electronic cage to die of thirst, escapes to the outside world and vanishes.
If machine doctors are as perfect in their clinical skills as Ava in imitating love, and patients prefer them even though they know they are machines, will we need human doctors? The human doctor and her patient have a common bond – they both know they are going to die; machines do not die, they upgrade. Which is why, however hopeless a case may seem, a human doctor is driven by common humanity to do everything he or she can to save the patient. Machines are not human, they do not share our mortality, and are therefore not sympathetic to such irrational considerations. Given the considerable computing power they will develop in the future, they may develop an agenda of their own in which we play no part – perhaps in a language to which we will never have access.
Myles Harris is our Editor
This article appeared in the spring edition of the Salisbury Review. The next edition will be published on June 1st 2019. Subscribe