Some thoughts about the usefulness of AI
Following my last piece about what is happening in the AI space from a global – and somewhat geopolitical – perspective, I would like to drill down a bit into some of the specific questions about the usefulness of AI, and into why we might want to embrace it as it embeds itself in our lives.
For simplicity, and to avoid a jumble of vocabulary and terminology, let AI be defined as follows: AI is the simulation of human intelligence processes by machines (computers) by way of algorithms. These processes include learning, via the acquisition of information (data) and rules for using it; reasoning, via rules to reach approximate or definite conclusions; and self-correction. The "processes" and "rules" express themselves through the mathematically driven mechanisms built into the algorithms. Some applications of AI include expert systems, speech recognition and machine vision.
Our senior data scientist at Calyps recently said in a presentation about Calyps's AI solutions for hospitals: "Hospitals should be excited about AI, there are so many amazing benefits it can bring". He is right. The benefits are truly amazing and, what is more, today we are only scratching the surface. So let us realize and embrace that, because there is a lot of good that can come from AI. But we also need to be aware and informed, and consciously steer things that way.
Is AI useful?
Why is AI taking center stage in tech only now? What exactly can it do for us? Should we indeed embrace it, or fear it? For many of us these questions and many others loom large, for all sorts of reasons: lack of familiarity, natural skepticism about new things, fear of losing control, fear of losing privacy, fear of data abuse, the sheer complexity of it all, or our growing skepticism about the intentions of big tech. All understandable and fair questions.
And yet, we are a pretty adaptive species, and I would say that there is more good than bad in all this. To make that point, a short story: in the late 1980s I was living in the Netherlands, working in a technical job. I considered myself technologically pretty adept. After all, I had already bought the first Mac and felt I was on the cutting edge. Right? My then future wife, who was not a computer aficionado, said to me one day that they had just recently found viruses in computers, and that they could spread, so we needed to be careful. I laughed out loud and foolishly said "no my dear, that cannot be – you surely misunderstood". The rest is history…
In the meantime we have learned to live with computer viruses; technology combats them so we can continue to do the great things we can do, and love to do, on our computers. Hackers keep trying to wreak havoc and we fight back. And we continue to use our computers, plus all the other software-driven devices we have, happily and without any thought of stopping. We have never stopped using them. Why? Because, for the overwhelming majority, the benefits outweigh the disadvantages, by far.
And with AI it will be exactly the same – it already is. Not so strange really, since AI is the result of vast structures of code, of software, just like the many applications we use on our devices today. In AI this software takes the form of "algorithms". One major difference is that we can instruct or enable algorithms to "learn", which simply means being able to process huge amounts of data very quickly, see patterns in that data, and draw conclusions based on historic outcomes coming from similar or other datasets. It is like a child learning a game: it tries many ways to play it ("learns") until it "wins" or "completes" it with great regularity. In doing so, the child learns to make certain moves, follow certain paths, search for patterns and make many comparisons, which eventually leads to outcomes.
How did the child know a new outcome was good (or bad)? Because it had many earlier ones to compare with – and then created an internal ranking. Algorithms do exactly the same; they just use mathematical formulae and embedded deductions or decision junctions (a simple comparison is the "IF" function in an Excel spreadsheet), hierarchical structures, sorting mechanisms, various calculation methods and many others to do so.
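For readers who like to see the idea in code, here is a minimal toy sketch of that compare-and-rank loop. All the names and numbers are purely illustrative (this is not Calyps's actual method): an "agent" makes many random attempts at guessing a hidden target, scores each attempt with a simple comparison rule (much like a spreadsheet "IF"), and keeps an internal ranking of the best outcome so far.

```python
import random

def score(guess, target):
    # A simple comparison rule: the closer the guess, the higher the score.
    return -abs(guess - target)

def learn_by_ranking(target, attempts=200, seed=42):
    """Try many outcomes and keep a ranking of the best one seen so far."""
    rng = random.Random(seed)
    best_guess = None
    best_score = float("-inf")
    for _ in range(attempts):
        guess = rng.randint(0, 100)
        s = score(guess, target)
        if s > best_score:  # compare the new outcome with earlier ones
            best_guess, best_score = guess, s
    return best_guess

print(learn_by_ranking(target=37))  # after many attempts, lands on or near 37
```

Real machine-learning systems are of course vastly more sophisticated, but the core loop is recognizably the same: try, compare against past outcomes, rank, and keep what works.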
Go to part II
For more than 18 months now, CALYPS has been developing Artificial Intelligence specialized in optimizing patient flows for hospitals, for both scheduled and unscheduled activities. This "AI adventure", far from being another romantic overnight success, has been a grind fraught with pitfalls, both of a technical and a human nature. This year we would like to share with you our experiences, our thoughts, as well as some of our questions.