
Artificial Intelligence: The New Weapon of Insider Threats

Do you know how artificial intelligence affects you and your organization’s operations? The buzz around AI touts its unlimited applications and inspires animated discussions of its usefulness as a positive force in every business, including healthcare, the gig economy, transportation, education, finance, and security.

However, every organization must acknowledge and take seriously the potential harm that can be caused by insiders who misuse AI as a weapon for personal gain or to settle scores. Mitigating the risks associated with such actions must be a top priority for corporate security officers, human resource managers, and operations directors.

In the hands of a motivated insider with only average technical proficiency, AI becomes a uniquely effective tool with which to penetrate an organization’s complete security infrastructure for any number of malicious purposes. While trusted people within your organization are its strongest resource, they are also its greatest vulnerability. In 2024, every corporate entity, charitable organization, or government agency must redouble its efforts to reduce its exposure to AI-driven insider threats. 

The Scope of the Insider Threat

Calculating the exposure of your organization to a malicious insider is a complex task. However, analyzing statistics from various industries can provide insights into the evolving risks for your business. 

Ponemon Institute’s 2023 study of 309 organizations with an insider incident revealed that 7,343 such events occurred in one year, an average of twenty-four per company. The study also showed that 12 percent of employees take sensitive intellectual property (IP) with them when they leave their job. During the pandemic, the increased use of remote workers coincided with a 200 percent hike in unsanctioned third-party work on corporate devices. 

The company that sponsored the Ponemon study contributed some of its own experiential data to the report, including investigations of 700 incidents involving departing employees stealing data from their employers in 2022, up from 350 the previous year. 

In a January 2024 report of survey results collected by another risk management company, 69 percent of employees admitted to deliberately bypassing security controls during the previous 12 months, and 93 percent of those employees knew their actions jeopardized their employer but did so anyway.

While most data breaches result from negligence or third-party malware, 26 percent of these incidents were attributable to malicious insiders in 2022. 

How AI Enables Insiders to Damage Employers 

Identifying insiders who pose a significant threat to an organization requires analyzing those who:

  1. have access to proprietary data, customer information, or strategic market planning materials, and
  2. have experienced a life-altering event outside of work that compromises their integrity.

In a governmental context, the greatest threat comes from those whose security clearance enables them to access classified material with the potential to harm national security or public safety.

We know that these individuals posed risks even before the introduction of AI. 

  • The widespread availability of AI-powered programs now arms motivated bad actors with the capacity to infiltrate an organization’s computer network and seed it with misinformation or disinformation.
  • They can create AI-generated recordings to impersonate employees with higher network access, gaining control of financial accounts, confidential personnel files, or customer records.
  • With AI-assisted falsified identification, a malicious insider can access high-level corporate email accounts and profit from insider information that will affect the company’s stock performance.

Humans have always been able to harm others through disloyalty. But AI provides malicious insiders with an inanimate partner in crime whose chief characteristic is its perpetual consistency in the pursuit of its programmed mission. 

AI is not smarter than humans. It is merely more persistent. It “learns” by processing immense amounts of data to glean patterns, relationships, and rules. By the end of 2023, leading AI large language models (LLMs) had been trained on 45 terabytes of data from the internet. Remember, a gigabyte is about a billion bytes. A terabyte is 1,000 gigabytes, or about 1 trillion bytes. As you read this, these AI programs are continuing to devour billions of additional bytes of data.
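
To put that scale in concrete terms, here is a minimal arithmetic sketch. It is purely illustrative and simply restates the 45-terabyte figure cited above in bytes:

```python
# Illustrative arithmetic only: the training-data scale cited above, expressed in bytes.
GIGABYTE = 10**9              # about a billion bytes
TERABYTE = 1_000 * GIGABYTE   # 1,000 gigabytes, about a trillion bytes

training_data_tb = 45         # terabytes of internet text cited above
print(f"{training_data_tb} TB = {training_data_tb * TERABYTE:,} bytes")
# Prints: 45 TB = 45,000,000,000,000 bytes
```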

With the rush to bring AI-assisted programs into gig companies, healthcare systems, financial services, transportation, and security operations of all sizes, competent employees with hostile intentions, or with plans to obtain personal benefits, can manipulate the data being entered into an AI-assisted program over time. Since much of the data being fed into these systems is incorrect, outdated, discredited, or intentionally falsified, the resulting AI “hallucinations” can empower fraudulent behavior. Unless the misdirection is detected, the system continues to build on the misinformation, further compromising the program’s performance.
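
To make that data-manipulation risk concrete, the following minimal sketch, which uses synthetic data and is not tied to any real system or incident, trains a simple classifier twice: once on clean labels and once after an "insider" quietly flips a fraction of the training labels. The point is only that corrupted inputs silently degrade the model that depends on them.

```python
# Hypothetical illustration of training-data poisoning degrading a model.
# Requires numpy and scikit-learn; all data here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "business" data: 2,000 records, 20 features, binary outcome.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Baseline: model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Insider" poisoning: silently flip 20 percent of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print(f"Accuracy with clean training data:   {clean_model.score(X_test, y_test):.2%}")
print(f"Accuracy after 20% label flipping:   {poisoned_model.score(X_test, y_test):.2%}")
```

In practice the degradation is rarely this easy to measure, which is exactly why undetected manipulation can compound over time.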

How to Protect Your Organization from AI-Assisted Insider Fraud

Only a comprehensive range of technological and human resources can provide the minimum level of protection necessary against the evolving AI-assisted schemes developed by motivated insiders with knowledge of and access to an organization’s computer network.

Employers take on risk with every member of their workforce. Only some of those employees will eventually become an insider threat. New employees are initially vetted by pre-hire background screening, verifying educational or professional credentials, and by observing their performance through training and over probationary employment periods. They develop valuable skills and earn trust over time. These one-time new hires eventually become an integral part of the organization’s operation, with privileged access to closely guarded company secrets and financial assets. 

Since 2020, companies that historically relied on on-site workers have had to adjust to accommodate remote workers, including independent contractors, consultants, or specialists brought on specifically to build a secure remote network of employees. However, the scattering of coworkers who require computer access to each other and to shared files presents security challenges for businesses in every industry. 

Effective and consistent communication of company policies regarding the proper use of network connections, password security, and integrity is a crucial element of successful asset protection. But these steps alone are insufficient to close the gaps in any organization’s security.

Knowing your people is key to accurately assessing the potential threat of insider misconduct to your business. Management must have some insight into external stressors and other negative influences that can impact the behavior of trusted members of their workforce. 

Personal crises in the life of an employee that manifest themselves through illegal activity provide valuable, relevant insight into the existence of a potential threat to the employer. Depending on the nature of the incident and the duties of the person involved, criminal activity can signal the existence of a strong incentive to act contrary to the employee’s usual character.

Charges involving financial fraud, drug use or distribution, DUI, or domestic violence are genuine crises in the personal life of any worker. Whether the accused is a manufacturing line worker or a corporate director, a criminal prosecution inflicts extraordinary stress on the individual and their family members.

Just as AI and other technological advances have contributed to the threat posed by insiders with motives to benefit themselves or to harm their employer, technology has also been developed to conduct continuous screenings of every workforce member for off-duty criminal conduct, arrests, and prosecutions.

Unless a person’s arrest and prosecution are reported prominently in the press, most employers never learn of an employee’s criminal conduct. Company policies requiring self-reporting can oblige a worker to inform their employer of an arrest, but the failure to self-report goes undiscovered in most cases. Identifying these motivational triggers is imperative to building the foundation for combating insider threats.

An employee in need of tens of thousands of dollars for attorney’s fees may be more tempted to access company funds. Other criminal arrests can make an employee vulnerable to extortion or blackmail, compelling them to provide unauthorized access to a company’s computer network or to exfiltrate data to outside criminal organizations.

Such an event would do incalculable damage to the company’s reputation and destroy public trust in its ability to safeguard personal customer information. In 2024, no corporate leadership team can afford to remain in the dark about significant threats arising from a worker’s undiscovered criminal prosecution. 

In full compliance with both state and federal law, and with the consent of each employee, corporate security officers, risk management teams, and human resource departments can receive near-real-time notification of any worker’s criminal arrest and prosecution anywhere in the United States, 24 hours a day.
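
For readers who want a sense of how such a notification workflow might be structured, the sketch below is purely hypothetical: the record feed, roster, and name-matching logic are stand-ins for whatever a real screening provider and case-management process would supply, and real systems match on far more than a name.

```python
# Hypothetical sketch of a continuous-screening notification loop.
# The record feed and roster below are stand-ins, not any real vendor's API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ArrestRecord:
    name: str
    jurisdiction: str
    charge: str
    recorded_at: datetime

# Employees who have consented to continuous screening (hypothetical roster).
CONSENTED_EMPLOYEES = {"Jordan Smith", "Alex Rivera"}

def fetch_new_records() -> list[ArrestRecord]:
    """Stand-in for a call to a records provider; returns sample data."""
    return [
        ArrestRecord("Jordan Smith", "Travis County, TX", "Fraud", datetime.now()),
        ArrestRecord("Unrelated Person", "Cook County, IL", "DUI", datetime.now()),
    ]

def screen_once() -> None:
    """Match newly reported records against the consented roster and alert security."""
    for record in fetch_new_records():
        if record.name in CONSENTED_EMPLOYEES:
            # In practice this would route to security/HR through a case-management tool.
            print(f"ALERT: {record.name} - {record.charge} ({record.jurisdiction})")

if __name__ == "__main__":
    screen_once()
```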

No security plan can fully protect against malicious insiders accessing data or inflicting damage unless it includes continuous workforce background screening.

With all of its promise for increased efficiency and technological innovation, artificial intelligence must also be recognized as an evolving weapon in the hands of malicious insiders running AI-assisted fraud schemes. Protecting an organization’s valuable assets, confidential client identifiers, and hard-earned reputation from bad actors requires a proactive and comprehensive security program. Continuous workforce screening is an indispensable part of the arsenal of protection against AI-assisted insider threats.

A version of this article first appeared in the February 2024 issues of Law Journal Newsletters and Law.com.
