The Think Blog

4 key priorities for smart AI implementation in regulated industries

[Image: AI icons overlaid on a photo of a person working in the boardroom of a regulated-industry organization]

AI, and especially generative AI, is top of mind for organizational leaders. The fear of falling behind is strong, and few enterprise CIOs aren't under enormous pressure to deploy. While it's good advice for any organization to develop a thoughtful strategy for reducing risk and maximizing business value before adopting any kind of AI, highly-regulated industries like pharmaceuticals, healthcare, and financial services must be especially careful. After all, AI doesn't get an exemption from the laws and regulations governing how data is used and how companies communicate with consumers.

That said, though navigating this regulated landscape can be tricky, it's worth it for the value it can bring. However, to realize this value, the entire leadership team must develop a detailed plan to ensure they reach their goals without alienating customers and employees or running into regulatory trouble.

Alright, so how do you get started? In this blog post, we'll discuss the challenges regulated industries face in implementing both traditional and generative AI, and offer four priorities companies in these fields should consider when developing an AI strategy.

Start with a focus on customer and employee trust 

If the people who will interact with AI—patients, customers, doctors, brokers—don't trust it, they won't use it. And if you deploy AI that goes unused, you've wasted an enormous amount of time, resources, and money.

Employ change management

For customers and patients, strong change management is critical. For example, even in the 2020s, some hospital systems are still using paper charts rather than electronic medical records (EMR), and even in the systems that have made this transition, a small but significant number of physicians decided to retire rather than use EMR. A digital assistant is a much bigger leap for a physician to make. Integrating AI into daily work is an enormous change, and building trust is essential to its success.

Create a thoughtful communication plan

Introduce AI gradually, and ensure that there is a thoughtful, comprehensive marketing and sales plan to communicate how and why your organization is offering AI capabilities. Make sure to provide a human alternative, especially for activities like customer service or talking with a healthcare provider that previously involved a person. It will take time for customers and patients to get comfortable interacting with AI.

Demystify how AI and machine learning work

For employees, it's important to understand how AI and machine learning (ML) actually work. AI is often viewed as a black box (and in some cases actually is one): data goes in, results come out, and no one has any idea how the algorithm reached its conclusions. Wherever possible, adopt transparent AI that lets end users follow its logic. When this is not possible, hold workshops in which the AI analyzes cases with known outcomes, to demonstrate accuracy and build confidence.

Establish a training program and provide helpful information

Just as with customers, introduce AI gradually and, at each step, ensure everyone is well-informed and well-trained. Communication from every level of the organization, especially from executive leadership, must clearly and frequently explain the goals AI will help achieve and the benefits it will deliver.

Be clear that AI will support employees—not replace them

Additionally, don't use AI to replace people (which is both unethical and ineffective) but to help them. If people believe they will lose their jobs to AI, morale will plummet and trust will evaporate.

Experiment with risk reduction tactics

AI, and generative AI especially, presents risks to highly regulated industries. In pharmaceuticals and healthcare, regulations strictly govern the claims, language, and even layout with which pharma companies can communicate about their products. HIPAA and other regulations protect the privacy of patient information, which can place limitations on how healthcare data may be used to train LLMs. And don't forget that generative AI is prone to "hallucinations": if it gives customers or patients wrong information, people could be harmed or even die.


The risks in financial services are just as clear, even if life and limb may not be at stake. A biased or hallucinating AI could make poor predictions or provide inaccurate information, leading employees to make bad investment decisions that cost the company millions, or causing customers to lose money and financial stability.

To protect against these scenarios, organizations must:

  • Establish rigorous testing protocols, both before deployment and on an ongoing basis.
  • Ensure strong guardrails are in place so that results are accurate and stay within the bounds of relevant regulations.
  • Avoid letting AI operate on its own without human intervention unless the risk is very low. AI should assist human beings, who ultimately make the final decision.
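Putting the last two points together, the guardrail many teams start with is a simple confidence-and-risk gate that auto-releases only low-stakes results and queues everything else for a person. A minimal sketch (the 0.95 threshold and risk labels here are illustrative assumptions, not prescriptions):

```python
def route_decision(prediction: str, confidence: float, risk: str) -> tuple:
    """Auto-release an AI result only when confidence is high AND the
    stakes are low; everything else is queued for human review.
    The threshold and risk labels are hypothetical examples."""
    if risk == "low" and confidence >= 0.95:
        return ("auto", prediction)
    return ("human_review", prediction)

print(route_decision("approve", 0.99, "low"))   # released automatically
print(route_decision("approve", 0.99, "high"))  # still goes to a human
print(route_decision("approve", 0.60, "low"))   # low confidence: human
```

The key design choice is that the default path is human review; automation is the exception that must be earned, which matches the regulator's-eye view of risk.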

Look for privacy-friendly alternatives

Beyond health information, personally identifiable information (PII) is governed by a patchwork of state and national regulations, including the California Consumer Privacy Act and GDPR. Using PII and health data to train AI or, even worse, having PII turn up in responses could expose the organization to severe fines.

Anonymize all data from individuals before using it in training, and only use the data you need to obtain the desired result. Some organizations use synthetic data for training: an algorithm creates an artificial data set that replicates the relationships within the original set. In this way, organizations eliminate the problem of PII and health information, but extensive testing is required to ensure results from such training are valid.
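As one illustration of the anonymization step, records can be scrubbed before they ever enter a training pipeline. This sketch pseudonymizes known PII fields with a salted one-way hash so records stay linkable without exposing raw values; the field names and salt are hypothetical, and real pipelines would pair this with stronger techniques (tokenization, k-anonymity review):

```python
import hashlib

# Hypothetical list of PII columns; a real pipeline would derive this
# from a reviewed data inventory, not a hard-coded set.
PII_FIELDS = {"name", "email", "ssn", "address"}

def anonymize(record: dict, salt: str = "example-salt") -> dict:
    """Pseudonymize PII fields with a salted SHA-256 hash and pass
    everything else through unchanged."""
    clean = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            clean[key] = digest[:16]  # short stable pseudonym
        else:
            clean[key] = value
    return clean

patient = {"name": "Jane Doe", "age": 54, "diagnosis": "T2D"}
print(anonymize(patient))  # name is hashed; age and diagnosis pass through
```

Note that hashing alone is pseudonymization, not full anonymization: quasi-identifiers like age plus diagnosis can still re-identify people, which is exactly why the post calls for extensive testing.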

Keep ethics in AI front-and-center

An AI is only as good as the data on which it is trained, and data can contain hidden biases, which create biases in the AI. 

For example, a medical prediction AI used by US hospitals and insurance companies to identify patients in need of "high-risk care management" was found to significantly underestimate Black patients' severity of illness. An investigation found that the algorithm used healthcare spending as a key proxy for patients' medical needs; because sick Black patients spent roughly the same amount as healthy white patients, the algorithm underestimated their healthcare needs.

In the financial services industry, instances of AI bias have been more difficult to prove, but it's clear that bias is pervasive within the industry, which frequently uses predictive and AI algorithms to assist with mortgage approvals. A study conducted by The Markup and published by the Associated Press found that, compared to similar white applicants, lenders were far more likely to reject applicants of color: Black applicants were 80% more likely to be rejected, Native Americans 70% more likely, and Latinos 40% more likely.

Biased AI will produce inaccurate results and can expose the organization to liability. Employ a data scientist to ensure the data is as unbiased as possible. Rigorously test for bias before deployment and continuously in production, adjusting the model as needed.
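One common starting point for this kind of bias testing is the disparate impact ratio (the "four-fifths rule" used in US employment law), which compares favorable-outcome rates across groups. A minimal sketch with made-up decision data:

```python
def approval_rate(decisions):
    """Fraction of favorable outcomes (1 = approved, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected_group, reference_group):
    """Ratio of approval rates. Under the four-fifths rule, values
    below 0.8 are commonly flagged as potential adverse impact."""
    return approval_rate(protected_group) / approval_rate(reference_group)

# Hypothetical mortgage decisions, invented for illustration only.
reference_applicants = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # 80% approved
protected_applicants = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]  # 40% approved

ratio = disparate_impact(protected_applicants, reference_applicants)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50, well below 0.8
```

A single ratio is only a screening signal, not proof of bias or fairness; production testing should track several metrics across slices and over time, which is why the post recommends continuous monitoring rather than a one-time check.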

Be intentional with your AI strategy

Deploying AI in a highly-regulated industry is tricky, and requires careful research and planning; change management that comes from the very top of the organization; strong guardrails to ensure accuracy and prevent hallucination; and extensive testing to ensure that the data and results aren’t biased or inaccurate. It’s not a minor effort, and the entire organization will need to get behind it, but in the end, it’s worth the investment to achieve the benefits of AI.

Is your organization standing up an AI strategy? Our team of experts understands the complexities of compliance in regulated industries, and we can help.

