Return of the Jed-AI: January 2025 Embroker Newsletter
AI is changing everything. Check out Embroker's January 2025 newsletter on how you can make the most of AI while protecting your business from inherent risks.
Another year, another AI platform making headlines.
Admittedly, we had to do a double-take when we saw news of DeepSeek come out — we initially thought we were reading about the deep freeze temps that hit the southern states this month. Many of us probably didn’t want to start the new year with deep freezes or DeepSeek, but here we are.
Keeping track of the whirlwind developments in AI can sometimes feel like trying to chase a squirrel on caffeine. We totally get how overwhelming it can be.
But there’s no denying that AI has some pretty exciting perks for businesses, like cost savings, boosting productivity, and better efficiencies — when implemented correctly. That’s a key distinction because, on the flip side, AI can bring ample challenges when not used responsibly.
Since it’s a new year filled with new possibilities, priorities, and AI platforms, we thought it the perfect time to look into what professional services firms need to know about AI, the risks, and insurance.
So take a break from shoveling snow and get ready to dive into all things AI.
Let’s get into it.
- What’s going on?
- Managing the risks of AI
- AI, insurance, and governance
- What’s new from Embroker
What’s going on?
Why DeepSeek Shouldn’t Have Been a Surprise — Harvard Business Review
There have been headlines aplenty about the shock of DeepSeek. But is it really such an unexpected development? As this article points out, management theory could likely have predicted DeepSeek — and it can also offer insight into what may happen next.
Public DeepSeek AI database exposes API keys and other user data — ZDNet
No surprise with this one. As soon as news about DeepSeek came out, it was a given that there would be security concerns.
AI’s Power to Replace Workers Faces New Scrutiny, Starting in NY — Bloomberg Law News
This should be on every business owner’s radar. While New York might be the first state to use its Worker Adjustment and Retraining Notification (WARN) Act to require employers to disclose mass layoffs related to AI adoption, it won’t be the only one.
How Thomson Reuters and Anthropic built an AI that lawyers actually trust — VentureBeat
A new AI platform might be the answer to lawyers’ and tax professionals’ AI dreams. This article has everything you need to know about “one of the largest AI rollouts in the legal industry.”
Managing the risks of AI
“If your company uses AI to produce content, make decisions, or influence the lives of others, it’s likely you will be liable for whatever it does — especially when it makes a mistake.”
That line comes from a Wall Street Journal article and is a fair warning to all businesses using AI.
It’s no secret that every new technology comes with risk. The shortcomings of AI have become well-documented, notably for hallucinations (a.k.a. making stuff up), copyright infringement, and data privacy and security concerns. The terms of service for OpenAI, the developer of ChatGPT, even acknowledge accuracy problems:
“Given the probabilistic nature of machine learning, use of our Services may, in some situations, result in Output that does not accurately reflect real people, places, or facts […] You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing output from the Services.”
Of course, not everyone reads the terms of service. (Who hasn’t scrolled to the end of a software update agreement and clicked accept without reading?) And taking what AI produces at face value is the crux of the problem for many companies using the technology.
An article from IBM notes, “While organizations are chasing AI’s benefits […] they do not always tackle its potential risks, such as privacy concerns, security threats, and ethical and legal issues.”
Take the example of a lawyer in Canada who allegedly submitted case law fabricated by ChatGPT. When reviewing the submissions, the opposing counsel discovered that some of the cited cases didn’t exist. The opposing lawyers then sued the Canadian lawyer for special costs to recover the time wasted sorting out the false briefs.
Lawyers, financial professionals, and others offering professional services could also find themselves in serious legal hot water if their clients sue for malpractice or mistakes related to their AI use.
So, how can companies make the most of AI while protecting themselves from its inherent risks? By making proactive risk management their BFF. That includes:
- Assessing AI practices, including how AI is used and understanding the associated risks.
- Creating guidelines for using AI, including how information should be vetted.
- Establishing a culture of risk awareness within the company.
- Training employees on AI best practices.
- Updating company policies to incorporate AI usage, guidelines, approvals, limitations, copyright issues, etc.
- Getting insured (a bit more on that in a moment).
- Staying vigilant. Things move fast with AI, so keeping up with new developments, security concerns, and regulations is crucial.
The bottom line: When it comes to AI, risk management isn’t just a good idea — it’s essential.
(P.S. The National Institute of Standards and Technology has developed great (and free) documents to help organizations assess AI-related risks: Artificial Intelligence Risk Management Framework and the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.)
AI, insurance, and governance
Alright, after all that doom and gloom about the perils of AI, let’s talk a little insurance. While there are risks associated with AI, let’s face it, businesses that shy away from it are likely to be left in the dust. That’s why safeguarding your company is key to harnessing the opportunities that AI has to offer.
A core aspect of risk management for AI is having the appropriate insurance coverage to provide a financial and legal safety net for claims stemming from AI-related use:
- Professional liability insurance: This is your must-have coverage for claims alleging mistakes or negligence caused by AI systems — or humans.
- Cyber liability insurance: Provides protection from damages and liabilities due to cybersecurity incidents caused by AI use.
- Directors and officers (D&O) insurance: This coverage protects the personal assets of a company’s leadership team from decision-making liabilities, including AI initiatives.
- Employment practices liability insurance (EPLI): This type of policy covers companies against claims made by employees and applicants alleging discrimination or bias caused by AI systems.
Once you’ve got insurance coverage in place for potential AI conundrums, regularly review and update your policies to address new developments, concerns, and regulations so your company stays protected as new risks emerge. And if you’re unsure, instead of playing a guessing game about how to protect your company from AI risks, chat with your insurance providers. Think of them as your trusty strategic business partner for addressing AI (and other) risks.
Since we’ve shone a light on the potential AI risks your company could run into, you might be wondering what the insurance industry is cooking up to tackle its own AI woes. (Spoiler alert: We’re not just crossing our fingers and hoping for the best!)
The good news is that the insurance industry is actively stepping up to tackle these challenges and taking charge of responsible AI use. The National Association of Insurance Commissioners (NAIC) issued a model bulletin on insurer accountability for third-party AI systems. The bulletin outlines expectations for the governance of AI systems pertaining to fairness, accountability, and transparency, as well as risk management and internal controls.
Additionally, many states have introduced regulations requiring insurance companies to disclose the use of AI in decision-making processes and provide evidence that their systems are free from bias. Plus, insurers are developing methodologies to detect and prevent unwanted discrimination, prejudice, and lack of fairness in their systems.
It’s also worth mentioning that the effect of AI-related risks in the insurance industry is a bit of a different ball game compared to other sectors. “Importantly, the reversible nature of AI decisions in insurance means that the associated risks differ significantly from those in other domains,” reads a research summary from The Geneva Association.
In even better news, AI is offering substantial opportunities for insurance providers to make more accurate risk assessments, along with improving availability, affordability, and personalization of policies to reduce coverage gaps and enhance the customer experience.
Those are wins all around.
What’s new from Embroker?
Upcoming events, stories, and more
AI might be transforming tech, but is it creating new risks just as fast as it’s creating opportunities? Our Tech Risk Index report reveals how AI adoption fuels optimism while also raising privacy and security concerns. Notably, among 200 surveyed tech companies, 79% are hesitant to use AI internally due to risks.
We are bringing together insurance rigor and advanced technologies: Embroker CEO
Our CEO, Ben Jennings, was interviewed for The Insurtech Leadership Podcast at Insurtech Connect 2024. In the interview, Ben shares his perspectives on the insurance industry, the balance between technological innovation and insurance expertise for enhancing the customer experience, and how Embroker is leading the Insurtech 2.0 movement.
The future of risk assessment: How technology is transforming risk management
Check out our latest blog to learn how AI and other cutting-edge technologies are reshaping risk assessment for businesses.