Key Takeaways
- AI tools can be powerful, but they also raise real security and privacy concerns when sensitive data is involved.
- The safest approach is to understand how tools handle information, then set clear boundaries for what should never be entered or shared.
- Building “AI literacy” helps teams use these tools responsibly, so innovation does not outpace risk management.
This week, we’re pleased to feature the first in a series of guest posts from Information Technology Academic Director Cathie Wilson. Wilson is a longtime leader in the IT and database technology fields and has also worked as an IT educator for 15 years. (See her LinkedIn profile). She has led University College’s IT program since 2021.
In the cybersecurity space, it’s well known that the biggest security vulnerability in any organization is an end user who unintentionally allows malicious access. The same is true for AI.
Unless AI end users and employees are well educated on appropriate AI use, it’s increasingly likely that AI-driven vulnerabilities will be exposed. These vulnerabilities include inadvertently sharing proprietary company data and exposing customer data.
This means that the responsibility of protecting company and customer data does not fall solely on the cybersecurity team. It falls on every business professional, and each of us must arm ourselves with the knowledge required to be a responsible AI user.
The Responsible AI Institute recently published a guide on “Best Practices in Generative AI: Responsible Use and Development in the Modern Workplace,” and just last week released a great blog post titled “Getting Started with Generative AI: Opportunities and Risks.”
I believe that knowledge of AI, and of the opportunities and risks it presents, will be essential for all professionals. At the College of Professional Studies at the University of Denver, we are leaning into AI education with a new master’s degree and graduate certificate in AI Strategy and Application in IT to ensure that IT professionals are prepared to be informed leaders in organizational AI strategies, with a focus on responsible AI practices.
With comprehensive education and thoughtful application, AI can become a powerful (and secure) addition to our technology toolbelt.
This is the first of a series of periodic blog posts from IT Academic Director Cathie Wilson. Watch this blog to learn more about responsible AI use in business, or find out more about how the AI Strategy and Application in IT master’s degree and graduate certificate can equip you to use the power of AI to achieve business goals.
Frequently Asked Questions
What kinds of security risks come with everyday AI tools?
The biggest risks often involve sharing sensitive information, like personal data, internal documents, or confidential plans. Once information is entered into a tool, you may lose control over where it goes and how it is retained.
How can someone use AI responsibly at work without overthinking it?
Start with a simple rule: do not enter anything you would not be comfortable seeing shared or stored. Then use AI for safer tasks like outlining, brainstorming, or rewriting content that does not include sensitive details.
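As an illustration only, a team could even automate a lightweight version of this rule with a pre-submission check that scans a prompt for obviously sensitive patterns before it is pasted into any AI tool. The patterns below are a minimal sketch and assumptions for this example, not an official or exhaustive policy:

```python
import re

# Illustrative patterns for data that should never be pasted into an AI tool.
# These are example assumptions, not an exhaustive or official list.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal marker": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this memo marked Confidential for jane.doe@example.com"
findings = flag_sensitive(prompt)
if findings:
    print("Do not submit - found:", ", ".join(findings))
```

A check like this is no substitute for judgment or for an organization’s formal data policy, but it shows how the "never enter it" rule can be made concrete and habitual.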
What should organizations do before rolling out AI tools broadly?
They should set clear guidelines, train employees, and define which tools are approved for which use cases. A little structure early prevents messy problems later, especially around compliance and confidentiality.
What does “getting AI savvy” really mean?
It means understanding both the benefits and the blind spots, including data risks and limitations in accuracy. When people know what AI can and cannot do, they make better decisions and avoid preventable mistakes.