
The World Health Organization (WHO) has called for caution in the use of artificial intelligence (AI)-generated large language model (LLM) tools, in order to protect human well-being, human safety and autonomy, and public health.
LLMs include some of the fastest-growing platforms, such as ChatGPT, Bard, Bert and many others, that imitate understanding, processing and producing human communication. Their meteoric rise and growing experimental use for health-related purposes are generating considerable excitement about their potential to support people's health needs.
It is important that the risks be examined carefully when LLMs are used to improve access to health information, as decision-support tools, or to increase diagnostic capacity in under-resourced settings, so that people's health is protected and inequity is reduced.
While WHO is enthusiastic about the appropriate use of technologies, including LLMs, to support health-care professionals, patients, researchers and scientists, there is concern that the precautions normally exercised for any new technology are not being applied consistently to LLMs. This includes broad adherence to the key values of transparency, inclusion, public engagement, expert oversight and rigorous evaluation.
Rapid adoption of untested systems could lead to errors by health-care workers, cause harm to patients, erode trust in AI, and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world.
Concerns that require strict controls to ensure that the technologies are used in safe, effective and ethical ways include:
- The data used to train the AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness;
- LLMs generate responses that can appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors, especially for health-related responses;
- LLMs may be trained on data for which consent may not have previously been provided for such use, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response; and
- LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to distinguish from reliable health content.
While committed to harnessing new technologies, including AI and digital health, to improve human health, WHO recommends that policy-makers ensure patient safety and protection while technology firms work to commercialize LLMs.
WHO proposes that these concerns be addressed, and clear evidence of benefit be measured, before LLMs are used widely in routine health care and medicine, whether by individuals, care providers, or health system administrators and policy-makers.
WHO reiterates the importance of applying ethical principles and appropriate governance when designing, developing and deploying AI for health, as enumerated in its guidance on the ethics and governance of AI for health. The six core principles identified by WHO are: (1) protect autonomy; (2) promote human well-being, human safety and the public interest; (3) ensure transparency, explainability and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; and (6) promote AI that is responsive and sustainable.