Tag: artificial intelligence

  • AI May Soon Be Trained To Diagnose Mental Illness

    Some scientists believe that AI-diagnosed mental illness will be a reality in the space of years, not decades.

    Scientists in multiple fields of psychology are actively gathering data and conducting tests in an effort to teach artificial intelligence programs to diagnose mental illness in humans, according to a report in The Verge written by B. David Zarley, who himself has borderline personality disorder, as part of its Real World AI issue.

    Zarley met with multiple scientists who are each taking their own approach to machine learning in the service of finding a better way to diagnose psychological disorders.

    The current model, in which psychiatrists consult the DSM and make diagnoses based on a patient’s self-reported symptoms, is inherently biased and considered flawed by many in the field of psychology. The current director of the National Institute of Mental Health (NIMH), Dr. Joshua Gordon, shares that view.

    “We have to acknowledge in psychiatry that our current methods of diagnosis—based upon the DSM—our current methods of diagnosis are unsatisfactory anyway,” Gordon told Zarley in an interview.

    Diagnosing mental illness from purely physical data is not yet within reach the way it is for physical illness. With advances in computer science, however, it is finally possible to train AI software to compile data and recognize patterns on a scale that a human brain simply could not handle.

    “Machine learning is crucial to getting [Psychologist Pearl Chiu’s] work out of the lab and to the patients they are meant to help,” Zarley writes. “‘We have too much data, and we haven’t been able to find these patterns’ without the algorithms, Chiu says. Humans can’t sort through this much data—but computers can.”
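
    To make that concrete, here is a minimal, hypothetical sketch of the kind of pattern-finding the article describes: fitting a standard classifier to a feature matrix far too wide for a person to inspect by hand. The data, feature counts, and model choice are illustrative assumptions, not details from Chiu’s actual work.

    ```python
    # Hypothetical illustration only -- not Chiu's actual pipeline. It shows the
    # kind of task the article describes: finding diagnostic patterns in a
    # feature matrix far too wide for a person to inspect by hand.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    n_participants, n_features = 200, 5000             # thousands of measurements each
    X = rng.normal(size=(n_participants, n_features))  # stand-in for imaging/behavioral data
    y = rng.integers(0, 2, size=n_participants)        # stand-in diagnostic labels

    # A sparse (L1-penalized) classifier searches all 5,000 features for the few
    # that carry signal; on this random stand-in data, cross-validated accuracy
    # should hover near chance.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")
    ```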

    Additionally, scientists envision using MRI technology to help discover the root of certain mental illnesses or their symptoms, and even to treat them by letting patients directly see the results of their thoughts and better understand how their brains function.

    “[Research coordinator Whitney] Allen was asked to project her brain into the future, or focus on the immediate present, in an attempt to help find out what goes on under the hood when thinking about instant or delayed gratification, knowledge which could then be used to help rehabilitate people who cannot seem to forgo the instant hit, like addicts.”

    Many of the scientists Zarley spoke with believe that AI-diagnosed mental illness will be a reality in the space of years, not decades. However, there are both practical and ethical concerns to be considered.

    AI built and taught by humans, who are biased, cannot help but be biased itself. Zarley points out that “different cultures think of certain colors or numbers differently.” Data for an AI program must also be collected from human samples, which is far easier to do in a developed nation near a university. That leaves entire populations in poorer nations, and even rural populations in the U.S., largely out of the picture.

    There are also numerous ethical concerns any time the idea of artificial intelligence is raised. In their paper “The Ethics of Artificial Intelligence,” Nick Bostrom of the Future of Humanity Institute and Eliezer Yudkowsky of the Machine Intelligence Research Institute address multiple concerns.

    “Responsibility, transparency, auditability, incorruptibility, predictability, and a tendency to not make innocent victims scream with helpless frustration: all criteria that apply to humans performing social functions; all criteria that must be considered in an algorithm intended to replace human judgment of social functions; all criteria that may not appear in a journal of machine learning considering how an algorithm scales up to more computers.”

    Regardless, AI is on its way, and the scientists Zarley interviewed are optimistic about future results.

    View the original article at thefix.com

  • Artificial Intelligence System Aims To Identify Drug Thefts In Hospitals

    The technology is meant to be used as a tool to help administrators monitor employees and alert them to anything unusual. 

    A new artificial intelligence system will monitor hospital workers and assign each a score indicating how likely they are to steal prescription drugs from their workplace. The technology is meant to address the growing problem of healthcare workers diverting drugs from their place of employment.

    “The technology calculates how unusual one’s behavior is versus peers in their department, as well as peers across other hospitals, and analyzes a number of underlying metrics and patterns to create an overall risk score,” said Kevin MacDonald, CEO of Kit Check, which developed the system. 

    Kit Check develops software for prescription drug management and works with about 400 hospitals and other healthcare clients throughout the U.S. and Canada. The new system will assign each employee an Individual Risk Identification Score (IRIS), calculated from data in drug dispensing cabinets, electronic medical records, and drug disposal records.

    “The IRIS dashboard then shows who has the most risk in ranked order so hospital personnel can focus on people who are showing risky patterns,” MacDonald said. “The technology allows an administrator to look at why a person is scored as unusually risky and shows the specific transactions that contributed to the risk score.”
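
    Kit Check has not published how IRIS is computed, but the description above (peer comparison within a department, multiple underlying metrics, one ranked score) matches a simple peer-baseline anomaly pattern. The sketch below is a hypothetical illustration of that idea; the metric names, numbers, and aggregation rule are invented for the example.

    ```python
    # Hypothetical sketch of peer-baseline risk scoring, loosely modeled on the
    # description of IRIS above. Metric names and the aggregation rule are
    # invented for illustration; Kit Check's actual algorithm is not public.
    from statistics import mean, stdev

    # Per-employee monthly metrics, as might be drawn from dispensing-cabinet
    # and drug-disposal logs.
    department = {
        "nurse_a": {"dispenses": 120, "overrides": 2,  "wasted_doses": 5},
        "nurse_b": {"dispenses": 115, "overrides": 3,  "wasted_doses": 4},
        "nurse_c": {"dispenses": 118, "overrides": 1,  "wasted_doses": 6},
        "nurse_d": {"dispenses": 240, "overrides": 19, "wasted_doses": 31},
    }

    def risk_scores(dept):
        """Score each employee by how far each metric sits from the peer mean,
        in standard deviations, then average those deviations into one number."""
        metrics = next(iter(dept.values())).keys()
        baselines = {
            m: (mean(e[m] for e in dept.values()),
                stdev(e[m] for e in dept.values()))
            for m in metrics
        }
        return {
            name: mean(abs(e[m] - mu) / sd
                       for m, (mu, sd) in baselines.items() if sd > 0)
            for name, e in dept.items()
        }

    # Rank highest risk first, like the IRIS dashboard described above.
    for name, score in sorted(risk_scores(department).items(),
                              key=lambda kv: kv[1], reverse=True):
        print(f"{name}: {score:.2f}")
    ```

    In this toy data, nurse_d sits well outside the departmental baseline on every metric and so tops the ranking; as MacDonald notes below, a high score is a prompt for a conversation, not proof of diversion.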

    The technology is meant to be used as a tool to help administrators monitor employees and alert them to anything unusual. 

    “A person’s score can change over time, and it’s not a 100% certainty that a high score means a staff member is diverting medications,” MacDonald said. “There will be situations where a person’s patterns shifted in an unusual—but explainable—way, for example, temporarily getting assigned to a different department/pattern. IRIS allows hospital personnel to have that conversation, evaluate the available data, and move on to other staff members that represent high risk.”

    A Utah hospital reported that up to 4,800 patients may have been exposed to hepatitis C in 2015 through a nurse who diverted medications by swapping needles containing narcotics for needles filled with saline. Medication theft by healthcare workers is a growing problem, according to some healthcare professionals.

    “I think we’re all trying to figure this out,” said Angela Dunn, a medical epidemiologist with the U.S. Centers for Disease Control and Prevention (CDC).

    Scott Byington, president of the Utah chapter of the National Association of Drug Diversion Investigators, said that diversions from hospitals are likely to go unreported. 

    “A lot of the clinics or hospitals, when they catch employees doing theft, I would say more go unreported than reported,” he said. “All of a sudden somebody doesn’t show up for work and the rumor mill starts going. They’ll report it to us anonymously, usually, and when we go to investigate, (Human Resources employees) sometimes will just say, ‘We’re not going to release any information from that.’”

    Christine Nefcy, chief medical officer at McKay-Dee Hospital in Utah where the hep-C exposures occurred, said drug abuse is “rampant in communities across our country. Hospital personnel, hospital employees aren’t any different.”

    View the original article at thefix.com