
Hot Topics at IT Nation Secure 2024

I’m back from last week’s IT Nation Secure conference, and I want to share just a few of my takeaways from the sessions:

  • Hacking is bad and getting worse
  • AI is growing, and companies aren’t prepared
  • Most professional businesses aren’t ready for security regulations

Hacking Stats

Bad actors are stepping up their attacks, even as many businesses start improving their security:

  • Hacking attempts were recorded every 39 seconds in 2023. Source: University of Maryland
  • 82% of ransomware attacks target small to medium businesses. Source: CompTIA
  • Attacks aimed at exploiting [hardware and software] vulnerabilities almost tripled in 2023 (up 180% over the previous year). Source: 2024 Verizon Data Breach Investigations Report
  • Password attacks increased 10-fold, from 3 billion per month in 2022 to 30 billion per month in 2023! Source: Microsoft
  • If cybercrime were a nation, its $8 trillion in annual GDP would have ranked third in the world in 2023! Source: Statista
    • Note top countries’ 2023 GDP for perspective:
      • US: $27 trillion
      • China: $17.8 trillion
      • Germany: $4.4 trillion
      • Japan: $4.2 trillion

Are You Ready for Artificial Intelligence?

Unsurprisingly, AI was a hot topic. A lot of businesses are looking into ways to use the new technology, and it’s clear that those using AI correctly can expect to benefit from it.

Some quick research shows how various groups expect those benefits to materialize in the short term:

The US Chamber of Commerce says, “Using AI, businesses can manage and improve products, automate services and be proactive with customer data.” They go on to predict that it will provide:

  • An affordable way to save on employee costs while maximizing availability to customers using chatbots. “These AI-powered software or plug-ins can exist on a business’s website or app 24/7, allowing unlimited opportunity for customer engagement. Chatbots are set up using existing resources (like FAQs) and scripted responses to deliver the most appropriate response to a human user’s query.” (A minimal sketch of this FAQ-matching idea follows this list.)
  • A significant reduction in the time HR employees spend screening resumes, allowing more focused time for candidates who closely match job qualifications or company needs.
  • Cybersecurity AI that reviews behavior patterns in software and stored information. “Through interpreting what employees and customers do with digital tools and how they do it, AI can recognize things that are relevant and acceptable.” AI then flags or responds to activity that falls outside typical patterns.
  • A way to collect and analyze customer information from different channels, used with customer relationship management (CRM) software.
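To make the chatbot idea above concrete, here is a minimal sketch of FAQ-style matching: a user’s question is compared against a small set of scripted question-and-answer pairs, and the closest match (or a fallback) is returned. The FAQ entries, threshold, and matching method are illustrative assumptions, not a production chatbot design.

```python
# Minimal, illustrative FAQ-matching chatbot sketch (hypothetical entries).
from difflib import SequenceMatcher

FAQ = {
    "what are your business hours": "We are open Monday through Friday, 8am to 5pm.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "do you offer onsite support": "Yes, onsite support is available for local clients.",
}
FALLBACK = "I'm not sure about that one. A team member will follow up with you."

def answer(user_question: str) -> str:
    """Return the scripted response whose FAQ question best matches the user's query."""
    question = user_question.lower()
    best = max(FAQ, key=lambda q: SequenceMatcher(None, q, question).ratio())
    score = SequenceMatcher(None, best, question).ratio()
    return FAQ[best] if score > 0.5 else FALLBACK

print(answer("How can I reset my password?"))
```

A real deployment would typically lean on a vendor’s chatbot platform rather than hand-rolled matching, but the shape is the same: existing content in, scripted answer out.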

Microsoft, in its Work Trend Index Annual Report from May 2024, says:

  • Use of generative AI has nearly doubled in the last six months, with 75% of global knowledge workers using it.
  • Users say AI helps them save time (90%), focus on their most important work (85%), and be more creative (84%).
  • Within the next five years, 41% of leaders who are “extremely familiar” with AI expect to redesign business processes from the ground up with it.
  • AI users say it makes their overwhelming workload more manageable (92%), boosts their creativity (92%), and helps them focus on the most important work (93%).
  • They are 56% more likely to use AI to catch up on missed meetings, to analyze information (+51%), to design visual content (+49%), to interact with customers (+49%), and to brainstorm or problem-solve (+37%).

According to the Forbes Advisor survey, businesses are using AI across a wide range of areas.

  • The most popular applications include customer service, with 56% of respondents using AI for this purpose, and cybersecurity and fraud management, adopted by 51% of businesses.
  • Other notable uses of AI are customer relationship management (46%), digital personal assistants (47%), inventory management (40%) and content production (35%).
  • Businesses also leverage AI for product recommendations (33%), accounting (30%), supply chain operations (30%), recruitment and talent sourcing (26%) and audience segmentation (24%).
  • AI is used or planned for use in various aspects of business management. A significant number of businesses (53%) apply AI to improve production processes, while 51% adopt AI for process automation and 52% utilize it for search engine optimization tasks such as keyword research.
  • Companies are also leveraging AI for data aggregation (40%), idea generation (38%) and minimizing safety risks (38%). In addition, AI is being used to streamline internal communications, plans, presentations and reports (46%). Businesses employ AI for writing code (31%) and website copy (29%) as well.
  • AI is perceived as an asset for improving decision-making (44%), decreasing response times (53%) and avoiding mistakes (48%). Businesses also expect AI to help them save costs (59%) and streamline job processes (42%).
  • Additionally, businesses foresee AI streamlining communication with colleagues via email (46%), generating website copy (30%), fixing coding errors (41%), translating information (47%) and summarizing information (53%). Half of respondents believe ChatGPT will contribute to improved decision-making (50%) and enable the creation of content in different languages (44%).

But, as with all new technologies, there are risks along with the rewards.

Unfortunately, hackers are already leveraging AI to increase their efficiency and productivity in hacking or scamming businesses and people. Their traditional attack methods are more effective than ever, and newer, highly personalized deepfake attacks (both video and voice) are having great success. Most businesses are not improving their cybersecurity to keep pace with this.

Worse yet, most businesses are using AI in ways that create security risks. Microsoft shared that “data security is the crucial foundation for AI adoption.” One of the risks they note in the Work Trend Index Annual Report is that, “Without guidance or clearance from the top, employees are taking things into their own hands and keeping AI use under wraps:

  • 78% of AI users are bringing their own AI tools to work (BYOAI)—it’s even more common at small and medium-sized companies (80%).
  • [This] puts company data at risk in an environment where leaders’ #1 concern for the year ahead is cybersecurity and data privacy.”

Your risks depend on what kind of AI you’re using:

  • Cloud services only have access to what you upload. But any confidential information you upload becomes the property of the AI service. In short, you’ve just lost control of your own business data. Even worse, over 101,000 account credentials for OpenAI’s ChatGPT have already been exposed and made available for sale on the dark web in the last year. Stolen information has also been discovered in malware logs, which are traded in underground marketplaces.
  • If the AI application runs on your own in-house computers, you can safely use it to sift through data, find information, and correlate data. Once the AI has access to your data, it will surface whatever it can find for whoever asks. That means you have to implement appropriate access controls so that people can only reach data they should be able to see (a minimal sketch of such a check follows this list). Since AI will increasingly be built into hardware and vertical applications, this is critical to get ahead of.
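Here is one way to picture that kind of access control: an in-house AI assistant only searches documents the requesting user’s role is allowed to see. The roles, documents, and helper function below are hypothetical placeholders, a sketch of the idea rather than a complete implementation.

```python
# Illustrative sketch: limit what an in-house AI assistant can search
# based on the requesting user's role. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    content: str
    allowed_roles: set  # roles permitted to view this document

DOCUMENTS = [
    Document("Employee handbook", "General HR policies...", {"staff", "hr", "executive"}),
    Document("Payroll records", "Salary details...", {"hr", "executive"}),
    Document("Acquisition plan", "Confidential strategy...", {"executive"}),
]

def documents_for(user_role: str) -> list:
    """Return only the documents this role may see; the AI searches within these."""
    return [doc for doc in DOCUMENTS if user_role in doc.allowed_roles]

# A staff member's AI query can only draw on the handbook.
for doc in documents_for("staff"):
    print(doc.title)
```

The point is that the filter sits in front of the AI: whatever retrieval or search the tool does, it only ever sees the subset of data the user is entitled to.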

After deciding that you want your company to begin exploring AI:

  • The first step in setting up AI is to train your people about how it should and should not be used.
  • Security awareness training and education can help mitigate cyber threats by teaching employees to identify potential dangers, safeguard sensitive data, and practice safe online behavior.
  • Cybersafe notes, “Today’s explosive adoption of generative artificial intelligence (AI) … has been transformative for industries worldwide, but has also brought additional cybersecurity risks to the fore. These include plagiarism, misinformation, copyright infringement, leaked data, and account compromise.”
  • IBM cites employee cybersecurity awareness training as the second most effective data breach cost mitigator.
  • Security awareness training and education can also help you meet regulatory requirements that set minimum standards for cybersecurity practices.
  • The next step should be configuring your data so that confidential information, such as employee records, personally identifiable information (PII), and intellectual property, is inaccessible to AI tools (a simple illustration follows below).
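One simple way to think about that configuration step is screening records for confidential markers before anything is handed to an AI tool. The patterns and sample records below are illustrative assumptions only; a real data-classification policy would cover far more than two regular expressions.

```python
# Illustrative sketch: exclude records that look like they contain PII
# before passing anything to an AI tool. Patterns and records are examples only.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def contains_pii(text: str) -> bool:
    """Return True if the text appears to contain an SSN or email address."""
    return bool(SSN_PATTERN.search(text) or EMAIL_PATTERN.search(text))

records = [
    "Quarterly sales grew 12% in Q2.",
    "Jane Doe, SSN 123-45-6789, starts Monday.",
    "Contact billing@example.com about renewal terms.",
]

safe_for_ai = [r for r in records if not contains_pii(r)]
print(safe_for_ai)  # only the first record survives the screen
```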

Regulatory Compliance

Privacy laws and cybersecurity regulations are becoming very common and will become increasingly so. Odds are that your business is currently subject to either an industry, state, or federal law requiring you to control client or personally identifiable information (PII) data.

The Forbes Advisor Survey notes that 31% of businesses express apprehensions about data security and privacy in the age of AI. They’re not alone.

The National Institute of Standards and Technology (NIST) “has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”

They’ve released a number of publications to help the country counter AI risks. The framework they’ve developed could become part of federal cybersecurity regulations.

The Cybersecurity and Infrastructure Security Agency (CISA) says, “The security challenges associated with AI parallel cybersecurity challenges associated with previous generations of software that manufacturers did not build to be secure by design, putting the burden of security on the customer. Although AI software systems might differ from traditional forms of software, fundamental security practices still apply.”

They plan to “play a key role in addressing and managing risks at the nexus of AI, cybersecurity, and critical infrastructure.” Their hope is to “ensure AI systems are protected from cyber-based threats, and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day.”

If you aren’t yet subject to this type of regulation, you can expect it within the next two to three years.

But if you are subject to regulation now, it’s important to research which regulations apply to you and then:

  • Understand the requirements
  • Create policies to implement the requirements
  • Get training for employees
  • Be sure IT configures systems correctly
  • Manage compliance
  • Stay on top of changes to regulations

Need Help?

Sagacent can help you address cybersecurity and compliance issues. Contact us today to get started.