74-year-old retired pastor Lester Isbill died after being restrained in a chair for over nine hours without water, food, or bathroom breaks, with a hood over his head. His autopsy was later amended to show death from heart disease complicated by dehydration and restraint, changing the manner of death to homicide.
The core failures here were fundamentally human judgment failures:

- Compassion and recognition failure - Video reportedly shows a nurse laughing while in the cell and another employee making an obscene gesture toward the camera. The problem wasn't lack of monitoring - it was lack of humanity.
- Protocol violations - Instructions for the restraint chair state that detainees shouldn't be left in it for more than two hours. Staff were checking on him periodically but chose not to provide water, give medical care, or release him.
- Discretionary judgment failure - He was arrested for disorderly conduct, a low-level misdemeanor for which people are usually released on their own recognizance within hours.
A robot in the cell might have:

- Documented the same deterioration
- Perhaps alerted supervisors more systematically
- But couldn't override human decisions, provide compassionate care, recognize a medical emergency requiring intervention, or exercise the discretion to say "this elderly, confused man with a pacemaker needs a hospital, not a jail cell."
A robot with proper medical sensors could have provided continuous vital sign monitoring - tracking his heart rate, blood pressure, body temperature, and hydration levels in real-time. Unlike periodic human checks, it could have generated automatic alerts when his condition deteriorated. He was restrained for over nine hours, and automated systems might have flagged the violation of the two-hour restraint chair protocol more insistently than human staff who chose to ignore it.
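To make the idea concrete, the kind of rule-based alerting described above can be sketched in a few lines. This is a purely hypothetical illustration: none of the thresholds, field names, or functions come from the case record or any real jail system; they only show how a monitor could compare vitals and restraint time against fixed limits.

```python
from dataclasses import dataclass

# Hypothetical limit reflecting the two-hour restraint-chair protocol
RESTRAINT_LIMIT_HOURS = 2.0

@dataclass
class Vitals:
    heart_rate: int    # beats per minute
    systolic_bp: int   # mmHg
    temp_c: float      # body temperature, Celsius

def check_detainee(vitals: Vitals, hours_restrained: float) -> list[str]:
    """Return an alert message for every threshold that is exceeded."""
    alerts = []
    if hours_restrained > RESTRAINT_LIMIT_HOURS:
        alerts.append(
            f"restraint time {hours_restrained:.1f} h exceeds "
            f"{RESTRAINT_LIMIT_HOURS:.0f} h protocol limit"
        )
    if vitals.heart_rate > 120 or vitals.heart_rate < 45:
        alerts.append("heart rate outside safe range")
    if vitals.systolic_bp < 90:
        alerts.append("possible hypotension / dehydration")
    if vitals.temp_c > 38.5:
        alerts.append("elevated body temperature")
    return alerts
```

Nine hours in the chair would trip the first rule on every check; the point of the sketch is that such violations are trivially detectable by machine, which is exactly why detection was never the real problem here.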
A robot might also have created an unambiguous, timestamped record of every interaction (or lack thereof) - making it harder to downplay or obscure what happened.
But the fundamental problem remains:
The staff knew what was happening. Video shows personnel in his cell, with one nurse reportedly laughing and another employee making an obscene gesture. They were aware he'd been restrained for hours without water or bathroom access. The issue wasn't a lack of information - it was the decision not to act on that information.
Even if a robot had sent alerts saying "detainee vital signs deteriorating," "nine hours in restraint chair exceeds protocol," or "severe dehydration risk" - those alerts still go to the same human staff who were already ignoring his condition. If people are willing to ignore an elderly man's obvious distress and medical needs, they can silence or override robotic alerts too.
This tragedy required either (1) someone with authority to intervene despite staff indifference, or (2) real-time external oversight with enforcement power. Technology alone can't create accountability when the humans in charge choose cruelty over care.
Unlike the human staff in this case, a robot would never have:

- Laughed at a dying man
- Made obscene gestures at cameras
- Participated in the cruel mockery that characterized this case
- Brought personal biases, frustration, or callousness into the situation
In that sense, yes - robotic guards might prevent some forms of active cruelty and abuse. They wouldn't escalate situations out of anger, retaliate against difficult inmates, or engage in the kind of dehumanizing behavior captured on that video.
However, the trade-off is crucial:
While a robot wouldn't mock Pastor Isbill, it also wouldn't have the capacity that a good human guard would have - to recognize "this man is seriously ill and needs help NOW" and override standard procedures to get him emergency medical attention. It wouldn't exercise compassion, or give him water unless explicitly instructed to do so.
The real tragedy is that we're weighing "at least robots won't be actively cruel" against "but humans could show mercy." The solution shouldn't be choosing between cruelty and indifference - it should be accountability systems that ensure guards act with basic human decency.