By Tim McCulloch, Chief Technology Officer
Reading Time: 5 minutes

As CTOs, we’re always on the hunt for the next big thing to drive innovation and efficiency in our organizations. Generative AI is the latest technological phenomenon that promises to transform workflows and boost productivity. But let’s be honest: its adoption has been slower than a Monday morning without coffee. Why? Because of the complex data and security challenges it brings along.

Data Readiness for AI

If you’re anything like me, you’ve probably spent a few sleepless nights tossing and turning on this: How can we harness the power of generative AI while keeping our organization’s data secure and private? It’s the million-dollar question we’re all trying to answer right now.

The heart of the issue really is data discovery and governance. Many organizations don’t have a clear picture of their data landscape—where sensitive information resides, who has access to it, and how it’s being used. This knowledge gap creates significant risks when integrating AI tools into our workflows.

The JPMorgan Chase AI Leak: A Cautionary Tale

Let’s dive into a real-world example that’s quite eye-opening. In February 2023, JPMorgan Chase discovered that one of its employees had used an AI tool to generate summaries of confidential financial information. The AI-generated content included invented projections, fake client meetings, and fabricated research.

This incident not only led to an internal review of AI usage policies but also highlighted the risks of using generative AI tools with sensitive financial data. The bank had to reassure clients and regulators about the integrity of its information and tighten its AI usage guidelines.

If it can happen at the largest bank in the world [at the time of this writing], it can happen anywhere.

Building a Foundation for Secure AI Adoption

To safely leverage generative AI, we need to establish a robust framework for data governance. So, here’s the game plan we’re guiding our clients through (a minimal classification-and-access sketch follows the list):

  1. Data Discovery: Conduct a thorough audit of your data assets. Identify what types of data you have, where it’s stored, and who has access to it. Think of it as a treasure hunt, but instead of gold, you’re finding sensitive data.
  2. Data Classification: Implement a tagging system to categorize data based on sensitivity and privacy requirements. It’s like putting labels on your pantry jars—so you don’t mistake salt for sugar.
  3. Access Control: Establish strict conditional access policies to ensure only authorized personnel can interact with sensitive data. Remember, not everyone needs the keys to the kingdom.
  4. Verification Processes: Implement checks and balances to verify data usage and prevent unauthorized access or leaks. Trust, but verify—just like you would with a teenager’s curfew.
  5. Continuous Monitoring: Treat data governance as an ongoing process, not a one-time effort. Regularly review and update your policies and practices. It’s like flossing; you can’t just do it once and expect perfect teeth.
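
To make steps 2 and 3 a little more concrete, here’s a minimal sketch in Python of what a sensitivity tag and a conditional access check might look like. The tier names, roles, and the is_access_allowed helper are illustrative assumptions, not part of any particular product or framework; your own classification scheme and access policies will look different.

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical sensitivity tiers: adapt these to your own classification scheme.
    class Sensitivity(Enum):
        PUBLIC = 1
        INTERNAL = 2
        CONFIDENTIAL = 3
        RESTRICTED = 4

    @dataclass
    class DataAsset:
        name: str
        location: str             # e.g., a share, bucket, or database found in step 1
        sensitivity: Sensitivity  # the label applied in step 2
        owner: str                # the accountable business owner, not just IT

    # Hypothetical role-to-clearance mapping used for conditional access (step 3).
    ROLE_CLEARANCE = {
        "analyst": Sensitivity.INTERNAL,
        "finance_lead": Sensitivity.CONFIDENTIAL,
        "ciso": Sensitivity.RESTRICTED,
    }

    def is_access_allowed(role: str, asset: DataAsset) -> bool:
        """Allow access only when the role's clearance meets or exceeds the asset's sensitivity."""
        clearance = ROLE_CLEARANCE.get(role, Sensitivity.PUBLIC)
        return clearance.value >= asset.sensitivity.value

    # Example: an analyst should not be able to feed restricted data into a generative AI tool.
    forecast = DataAsset("Q3 forecast", "finance-reports/q3", Sensitivity.RESTRICTED, "CFO office")
    print(is_access_allowed("analyst", forecast))  # False
    print(is_access_allowed("ciso", forecast))     # True

The point isn’t the code itself; it’s that classification and access decisions become explicit, reviewable artifacts instead of tribal knowledge.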

Leveraging External Expertise

For many organizations, building this framework from scratch can be as daunting as assembling IKEA furniture without instructions. That’s where partnering with cybersecurity experts can be invaluable. They can help you establish best practices, implement robust data discovery processes, ensure ongoing compliance, and empower continuous monitoring that’s uniquely suited to your business landscape.

What to Look for in Continuous Monitoring Tools

When it comes to continuous monitoring tools, here are the key features I recommend you look for:

  • Business Process Orientation: The tool should align with your business processes and allow you to define custom performance metrics; you should not have to alter your business processes or workflows to accommodate the tool.
  • Anomaly Detection: It should automatically identify anomalies at a granular level, based on unique rules and parameters you set, so you can catch issues before they have a chance to become full-blown crises. (A minimal rule-based sketch follows this list.)
  • Feature and Output Behavior Tracking: The tool should track how features shift and transform over time, helping you understand and explain model behavior and know exactly how to adjust.
  • Comprehensive Data Handling: It should handle various types of data, from time series to categorical to tabular, in the cloud, on-prem, and everywhere in between, ensuring complete visibility into all the data feeding in and out of your AI system.
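
To show what “anomalies at a granular level based on rules you set” can mean in practice, here’s a minimal rolling z-score rule sketched in Python. The window, threshold, and the token-spend metric are illustrative assumptions; a real monitoring tool would let you define rules like this per metric, per team, or per model.

    from statistics import mean, stdev

    def detect_anomalies(values, window=30, z_threshold=3.0):
        """Flag points that sit more than z_threshold standard deviations
        away from the mean of the trailing window. Parameters are illustrative."""
        anomalies = []
        for i in range(window, len(values)):
            history = values[i - window:i]
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(values[i] - mu) / sigma > z_threshold:
                anomalies.append((i, values[i]))
        return anomalies

    # Example: daily AI token spend for one department (hypothetical metric),
    # steady for two months and then a sudden spike on the final day.
    daily_spend = [100 + (day % 7) for day in range(60)] + [450]
    print(detect_anomalies(daily_spend))  # [(60, 450)]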

What to Avoid in Continuous Monitoring Tools

Avoid tools that promise to work straight out of the box or only handle a subset of the monitoring process; they may be more headache than they’re worth. Here are some other specific pitfalls to watch out for (a simple sketch of alert design with clear actions follows the list):

  • Unmanageable Alert Volumes: Tools that generate too many alerts can overwhelm your team and lead to alert fatigue. Alerts should be carefully designed, with rules in place, and will vary with your company’s size and complexity, technical environment, and industry.
  • Unclear Actions: Alerts should come with clear instructions on what actions need to be taken. Avoid tools that leave your team guessing.
  • Monitoring Gaps: Ensure the tool covers all critical components of your infrastructure. Missing alerts due to unmonitored areas can lead to significant issues and risks.
  • High Overhead or Hidden, Escalating Costs: Be wary of tools with high instrumentation and ongoing maintenance costs, or tools whose costs escalate quickly over time; explore the vendor’s pricing model in detail before you commit.
  • Lack of Integration: Tools that don’t integrate well with your existing systems can create silos and complicate your monitoring efforts, leaving some areas more vulnerable.
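
As a rough illustration of the first two pitfalls, here’s a sketch of an alert catalog where every rule carries an explicit action and a cooldown window to keep volumes manageable. The rule names, severities, and cooldowns are assumptions made for illustration, not recommendations for any specific tool.

    import time
    from collections import defaultdict

    # Hypothetical alert catalog: each rule names a clear action, so no one is left guessing,
    # and a cooldown window, so repeated triggers don't flood the team.
    ALERT_RULES = {
        "pii_in_prompt": {
            "severity": "critical",
            "action": "Block the request and notify the data governance team.",
            "cooldown_seconds": 300,
        },
        "token_spend_spike": {
            "severity": "warning",
            "action": "Review the usage dashboard and confirm with the owning department.",
            "cooldown_seconds": 3600,
        },
    }

    _last_fired = defaultdict(float)

    def raise_alert(rule_name: str):
        """Fire an alert unless the same rule already fired within its cooldown window."""
        rule = ALERT_RULES[rule_name]
        now = time.time()
        if now - _last_fired[rule_name] < rule["cooldown_seconds"]:
            return None  # suppressed to avoid alert fatigue
        _last_fired[rule_name] = now
        return f"[{rule['severity'].upper()}] {rule_name}: {rule['action']}"

    print(raise_alert("pii_in_prompt"))  # fires with a clear next step
    print(raise_alert("pii_in_prompt"))  # None: suppressed within the cooldown window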

The Path Forward

As CTOs, it’s our responsibility to balance innovation with security. By establishing a solid foundation of data governance and security practices, we can unlock the full potential of generative AI while safeguarding our organizations’ most valuable assets.

Remember, this isn’t a one-and-done effort but an ongoing commitment to data stewardship. With the right approach, you can turn the challenge of generative AI adoption into a competitive advantage, driving your business forward, safely.

If you’re in need of a better night’s sleep and some help with best practices in generative AI adoption and data governance, our cybersecurity team is a great place to start. You can schedule an initial chat with one of our experts to learn more about our approach to data discovery and classification, setting access control rules, verifying your processes, and deploying best practices for data governance so you’re ready to take the AI leap.

Need help securing your organization?

Let’s talk

Contact your MicroAge Account Executive at (800) 544-8877 to learn how to leverage the power of Generative AI while securing your environment and data.

As CTO, Tim McCulloch is instrumental in shaping MicroAge’s corporate IT strategy, driving the AI/ML roadmap and service adoption, ensuring corporate security compliance, leading sales engineering, evolving the company’s cybersecurity practice, and delivering exceptional value to clients.

Tim McCulloch, Chief Technology Officer

