Even more thoughts about the usefulness of AI
Ok, so now you might ask: why use AI to solve problems (generate outcomes) if we humans can already do that in our brains? There is a not-so-small problem with that…
Human capacity to deal with vast volumes of data is limited. It is popularly claimed that we only use around 10% of our brain’s capacity, with the remaining 90% surely available to handle big data. Neuroscientists will tell you that figure is a myth, but I will let other scientists or Wiki explain that. Either way, we are limited in our ability to store and remember, which often leads us to make decisions based on very little information (data) or very few historic outcome comparisons. How often do we not make decisions that way? Even if it can be validly argued that instinct and intuition are themselves stored history, thought reflexes triggered within us.
Instinct and intuition, while always present and shaped by history, tend to be diminished by our most recent experiences and outcomes, as well as by the inputs, pressure and dissuasions of others. We also cannot mentally juggle too many variables at once, and we have built-in preferences, so we always apply bias in some form. It has been demonstrated that we humans are inclined to use so-called “strong” factors in our internal weightings as we make decisions: the ones we remember, perhaps prefer, or which indeed typically impact heavily on the issue at hand. What is often underestimated is the collective impact (and thus importance) of the many “weak” factors that lurk in the background and are often ignored. Because we simply cannot recall them? Because there are too many of them and we cannot sort them in our minds? Because we don’t like them? Or because we already have preferred outcomes in mind which we do not want to endanger (a very human habit called “motivated reasoning”)? Probably a mix of all of these. AI can pull in all factors, weak and strong, and generate fact-based, reality-reflecting, unbiased results/outcomes.
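To make the strong-versus-weak point concrete, here is a toy sketch (the numbers are entirely illustrative, not Calyps code): three “strong” factors dominate individually, yet fifty “weak” factors, each negligible on its own, shift the overall score substantially once they are all summed up.

```python
import random

random.seed(0)

# Three "strong" factors a human decision-maker would remember and weigh.
strong = [0.8, 0.7, 0.6]

# Fifty "weak" factors, each tiny (at most 0.06 in magnitude) and easy
# to dismiss individually.
weak = [random.uniform(-0.06, 0.0) for _ in range(50)]

# Human-style estimate: strong factors only.
human_score = sum(strong)

# Data-driven estimate: every factor, weak ones included.
full_score = sum(strong) + sum(weak)

print(f"strong factors only  : {human_score:+.3f}")
print(f"all factors          : {full_score:+.3f}")
print(f"collective weak pull : {sum(weak):+.3f}")
```

Each weak factor is individually lost in the noise, but their collective pull is of the same order as a strong factor, which is exactly the contribution a purely mental weighing tends to drop.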
Is AI able to match humans?
So thanks to technology, AI is able to work with vastly more information than our human brains can, and, unless deliberately programmed to, without bias. Through endless combinations and permutations of data and historic outcomes, and by learning from them, AI generates unbiased reflections of the reality contained in the data it works with. And the more data it can use, the better or more accurate the outcomes (e.g. for predictions). Having data, lots of it, is key…
For the less involved this may all seem a bit “out there”, but there is already more than ample proof of AI working highly effectively, with the help of very clever algorithms. And they are getting better all the time. There are even algorithms that “check” other algorithms: for robustness, for being benign, for efficiency, or for many other purposes. So what are examples of AI-for-good? There are many, such as those optimizing logistics, traffic flows, weather forecasts, and scheduling of almost any kind. The scope for efficiency, accuracy and economic improvements with the help of AI is huge.
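As a flavour of what “scheduling of almost any kind” rests on, here is one classic textbook building block (the time slots are made up, and real schedulers are of course far more elaborate): the earliest-finish-time greedy rule, which fits the maximum number of non-overlapping appointments into a day.

```python
def max_non_overlapping(intervals):
    """Return the largest set of pairwise non-overlapping (start, end) slots.

    Classic greedy rule: always take the slot that finishes earliest,
    then repeat among the slots that start after it.
    """
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:          # slot fits after the last chosen one
            chosen.append((start, end))
            last_end = end
    return chosen

# Hypothetical appointment requests as (start hour, end hour).
slots = [(9, 11), (10, 12), (11, 13), (12, 14), (9, 10)]
print(max_non_overlapping(slots))      # three slots fit without overlap
```

A human juggling these requests mentally can easily settle on two slots; the greedy rule provably finds the maximum, and it scales to thousands of requests, which is where automation earns its keep.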
We at Calyps are developing AI-driven solutions for planning & scheduling of patient flows in hospitals: from the emergency ward, to predicting lengths of patient stay, to unplanned events and all the related peripheral issues.
The work people do in hospitals is extremely important, highly useful and highly demanding. They work with, for and on other humans: probably one of the most stressful kinds of work. Decisions can have big consequences, and poor decisions even fatal ones. Hospital staff need to be alert, fast, quick thinkers, yet also able to slow down when interacting with patients, and they must treat patient data with great care. Not to mention drawing correct conclusions from it, preferably the best conclusions 100% of the time. That can be very stressful and can weigh heavily. No room, then, for dealing with organizational matters and admin that could be done effectively by other means. No room for firefighting, for too many corrections of too many (often cumulative) errors which eat up time, resources and rare skills that are needed elsewhere. Humans should really be able to do what they do best and what algorithms cannot do (at least not yet): interacting with patients, comforting a sick child or conducting an operation.
It is in the emotional, empathetic, human-to-human space (including, e.g., surgical operations) where AI cannot match humans, at least not yet. AI comes into play very usefully and effectively where big data is concerned and there is value to extract from it. In so doing, it relieves humans of the burden of processing what is too much for their brains in (quasi) real time, thereby reducing poor decision making. We at Calyps want to help in this arena: extract the value of that data in order to take away some of the pressure on hospital staff, be it stress-related or economic. This will require organizational and mindset changes, and these changes will certainly be a challenge for some. But if we step back and think it through, it is hard not to see that, ultimately, all stakeholders can benefit. This is a journey we at Calyps started two years ago, evolving from our roots in business intelligence, and every day convinces us that it is here to stay.
For more than 18 months now, CALYPS has been developing Artificial Intelligence specialized in optimizing patient flows for hospitals, for both scheduled and unscheduled activities. This “AI adventure”, far from being another romantic overnight success, has been a grind fraught with pitfalls, both technical and human. This year we would like to share with you our experiences, our thoughts, as well as some of our questions.